Contents: 221 found (showing 1–50)

Material to categorize
  1. The Collapse of Predictive Compression: Why Probabilistic Intelligence Fails Without Prime-Chiral Resonance.Devin Bostick - manuscript
    The current paradigm in artificial intelligence relies on probabilistic compression and entropy optimization. While powerful in reactive domains, these models fundamentally fail to produce coherent, deterministic intelligence. They approximate output without encoding the structural causes of cognition, leading to instability across recursion, contradiction, and long-range coherence. This paper introduces prime-chiral resonance (PCR) as the lawful substrate underpinning structured emergence. PCR replaces probability with phase-aligned intelligence, where signals are selected not by likelihood but by resonance with deterministic coherence (...)
  2. The Global Brain Argument: Nodes, Computroniums and the AI Megasystem (Target Paper for Special Issue).Susan Schneider - forthcoming - Disputatio.
    The Global Brain Argument contends that many of us are, or will be, part of a global brain network that includes both biological and artificial intelligences (AIs), such as generative AIs with increasing levels of sophistication. Today’s internet ecosystem is but a hodgepodge of fairly unintegrated programs, but it is evolving by the minute. Over time, technological improvements will facilitate smarter AIs and faster, higher-bandwidth information transfer and greater integration between devices in the internet-of-things. The Global Brain (GB) Argument says (...)
  3. Order, Disorder, and Criticality: Advanced Problems of Phase Transition Theory (8th edition).Yurij Holovatch (ed.) - 2024 - World Scientific Press.
    The field of neuroscience and the development of artificial neural networks (ANNs) have mutually influenced each other, drawing from and contributing to many concepts initially developed in statistical mechanics. Notably, Hopfield networks and Boltzmann machines are versions of the Ising model, a model extensively studied in statistical mechanics for over a century. In the first part of this chapter, we provide an overview of the principles, models, and applications of ANNs, highlighting their connections to statistical mechanics and statistical learning theory. (...)
  4. Security practices in AI development.Petr Spelda & Vit Stritecky - forthcoming - AI and Society.
    What makes safety claims about general purpose AI systems such as large language models trustworthy? We show that rather than the capabilities of security tools such as alignment and red teaming procedures, it is security practices based on these tools that contributed to reconfiguring the image of AI safety and made the claims acceptable. After showing what causes the gap between the capabilities of security tools and the desired safety guarantees, we critically investigate how AI security practices attempt to fill (...)
  5. On Explaining the Success of Induction.Tom F. Sterkenburg - 2025 - British Journal for the Philosophy of Science 76 (1):75-93.
    Douven observes that Schurz’s meta-inductive justification of induction cannot explain the great empirical success of induction, and offers an explanation based on computer simulations of the social and evolutionary development of our inductive practices. In this article, I argue that Douven’s account does not address the explanatory question that Schurz’s argument leaves open, and that the assumption of the environment’s induction-friendliness that is inherent to Douven’s simulations is not justified by Schurz’s argument.
    2 citations
  6. Laws of nature as results of a trade-off — Rethinking the Humean trade-off conception.Niels Linnemann & Robert Michels - forthcoming - Philosophical Quarterly.
    According to the standard Humean account of laws of nature, laws are selected partly as a result of an optimal trade-off between the scientific virtues of simplicity and strength. Roberts and Woodward have recently objected that such trade-offs play no role in how laws are chosen in science. In this paper, we first discuss an example from the field of automated scientific discovery which provides concrete support for Roberts and Woodward’s point that scientific theories are chosen based on a single-virtue (...)
  7. Learning incommensurate concepts.Hayley Clatterbuck & Hunter Gentry - 2025 - Synthese 205 (3):1-36.
    A central task of developmental psychology and philosophy of science is to show how humans learn radically new concepts. Famously, Fodor has argued that such learning is impossible if concepts have definitional structure and all learning is hypothesis testing. We present several learning processes that can generate novel concepts. They yield transformations of the fundamental feature space, generating new similarity structures which can underlie conceptual change. This framework provides a tractable, empiricist-friendly account that unifies and shores up various strands of (...)
  8. From the Fair Distribution of Predictions to the Fair Distribution of Social Goods: Evaluating the Impact of Fair Machine Learning on Long-Term Unemployment.Sebastian Zezulka & Konstantin Genin - 2024 - FAccT '24: Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency 2024:1984-2006.
    Deploying an algorithmically informed policy is a significant intervention in society. Prominent methods for algorithmic fairness focus on the distribution of predictions at the time of training, rather than the distribution of social goods that arises after deploying the algorithm in a specific social context. However, requiring a ‘fair’ distribution of predictions may undermine efforts at establishing a fair distribution of social goods. First, we argue that addressing this problem requires a notion of prospective fairness that anticipates the change in (...)
  9. Diving into Fair Pools: Algorithmic Fairness, Ensemble Forecasting, and the Wisdom of Crowds.Rush T. Stewart & Lee Elkin - forthcoming - Analysis.
    Is the pool of fair predictive algorithms fair? It depends, naturally, on both the criteria of fairness and on how we pool. We catalog the relevant facts for some of the most prominent statistical criteria of algorithmic fairness and the dominant approaches to pooling forecasts: linear, geometric, and multiplicative. Only linear pooling, a format at the heart of ensemble methods, preserves any of the central criteria we consider. Drawing on work in the social sciences and social epistemology on the theoretical (...)
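    As a concrete illustration of the three pooling rules named in the abstract, here is a minimal sketch in Python. This is not the authors' code; the forecasts, the equal weights, and the restriction to a single binary event are invented for illustration.

```python
import numpy as np

def linear_pool(ps, ws):
    """Weighted arithmetic mean of the individual forecasts."""
    return float(np.dot(ws, ps))

def geometric_pool(ps, ws):
    """Weighted geometric mean, renormalized over {event, not-event}."""
    ps = np.asarray(ps)
    num = np.prod(ps ** ws)
    return float(num / (num + np.prod((1.0 - ps) ** ws)))

def multiplicative_pool(ps):
    """Straight product of the forecasts, renormalized over {event, not-event}."""
    ps = np.asarray(ps)
    num = np.prod(ps)
    return float(num / (num + np.prod(1.0 - ps)))

forecasts = [0.6, 0.8, 0.7]   # three forecasters' probabilities for one event
weights = np.ones(3) / 3      # equal weights
print(linear_pool(forecasts, weights),
      geometric_pool(forecasts, weights),
      multiplicative_pool(forecasts))
```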
  10. Leveraging AI for Cognitive Self-Engineering: A Framework for Externalized Intelligence.P. Sati - manuscript
    This paper explores a novel methodology for utilizing artificial intelligence (AI), specifically large language models (LLMs) like ChatGPT, as an external cognitive augmentation tool. By integrating recursive self-analysis, structured thought expansion, and AI-facilitated self-modification, individuals can enhance cognitive efficiency, accelerate self-improvement, and systematically refine their intellectual and psychological faculties. This approach builds on theories of extended cognition, recursive intelligence, and cognitive bias mitigation, demonstrating AI's potential as a structured self-engineering framework. The implications extend to research, strategic decision-making, therapy, and personal (...)
  11. Construct Validity in Automated Counterterrorism Analysis.Adrian K. Yee - 2025 - Philosophy of Science 92 (1):1-18.
    Governments and social scientists are increasingly developing machine learning methods to automate the process of identifying terrorists in real time and predict future attacks. However, current operationalizations of “terrorist” in artificial intelligence are difficult to justify given three issues that remain neglected: insufficient construct legitimacy, insufficient criterion validity, and insufficient construct validity. I conclude that machine learning methods should at most be used for the identification of singular individuals deemed terrorists and not for identifying possible terrorists from some more general (...)
  12. A Capability Approach to AI Ethics.Emanuele Ratti & Mark Graves - 2025 - American Philosophical Quarterly 62 (1):1-16.
    We propose a conceptualization and implementation of AI ethics via the capability approach. We aim to show that conceptualizing AI ethics through the capability approach has two main advantages for AI ethics as a discipline. First, it helps clarify the ethical dimension of AI tools. Second, it provides guidance to implementing ethical considerations within the design of AI tools. We illustrate these advantages in the context of AI tools in medicine, by showing how ethics-based auditing of AI tools in medicine (...)
  13. Can Computers Reason Like Medievals? Building ‘Formal Understanding’ into the Chinese Room.Lassi Saario-Ramsay - 2024 - In Alexander D. Carruth, Heidi Haanila, Paavo Pylkkänen & Pii Telakivi, True Colors, Time After Time: Essays Honoring Valtteri Arstila. Turku: University of Turku. pp. 332–358.
  14. Tool, Collaborator, or Participant: AI and Artistic Agency.Anthony Cross - forthcoming - British Journal of Aesthetics.
    Artificial intelligence is now capable of generating sophisticated and compelling images from simple text prompts. In this paper, I focus specifically on how artists might make use of AI to create art. Most existing discourse analogizes AI to a tool or collaborator; this focuses our attention on AI’s contribution to the production of an artistically significant output. I propose an alternative approach, the exploration paradigm, which suggests that artists instead relate to AI as a participant: artists create a space for (...)
  15. AI-enhanced nudging: A Risk-factors Analysis.Marianna Bergamaschi Ganapini & Enrico Panai - forthcoming - American Philosophical Quarterly.
    Artificial intelligence technologies are utilized to provide online personalized recommendations, suggestions, or prompts that can influence people's decision-making processes. We call this AI-enhanced nudging (or AI-nudging for short). Contrary to the received wisdom, we claim that AI-enhanced nudging is not necessarily morally problematic. To start assessing the risks and moral import of AI-nudging, we believe that we should adopt a risk-factor analysis: we show that both the level of risk and possibly the moral value of adopting AI-nudging ultimately depend on (...)
  16. The case for human–AI interaction as system 0 thinking.Marianna Bergamaschi Ganapini - 2024 - Nature Human Behaviour 8.
    The rapid integration of artificial intelligence (AI) tools into our daily lives is reshaping how we think and make decisions. We propose that data-driven AI systems, by transcending individual artefacts and interfacing with a dynamic, multiartefact ecosystem, constitute a distinct psychological system. We call this ‘system 0’, and position it alongside Kahneman’s system 1 (fast, intuitive thinking) and system 2 (slow, analytical thinking). System 0 represents the outsourcing of certain cognitive tasks to AI, which can process vast amounts of data (...)
  17. Ambiguous Decisions in Bayesianism and Imprecise Probability.Mantas Radzvilas, William Peden & Francesco De Pretis - 2024 - British Journal for the Philosophy of Science Short Reads.
    Do imprecise beliefs lead to worse decisions under uncertainty? This BJPS Short Reads article provides an informal introduction to our use of agent-based modelling to investigate this question. We explain the strengths of imprecise probabilities for modelling evidential states. We explain how we used an agent-based model to investigate the relative performance of Imprecise Bayesian reasoners against a standard Bayesian who has precise credences. We found that the very features of Imprecise Bayesianism which give it representational strengths also cause relative (...)
  18. A comparison of imprecise Bayesianism and Dempster–Shafer theory for automated decisions under ambiguity.Mantas Radzvilas, William Peden, Daniele Tortoli & Francesco De Pretis - forthcoming - Journal of Logic and Computation.
    Ambiguity occurs insofar as a reasoner lacks information about the relevant physical probabilities. There are objections to the application of standard Bayesian inductive logic and decision theory in contexts of significant ambiguity. A variety of alternative frameworks for reasoning under ambiguity have been proposed. Two of the most prominent are Imprecise Bayesianism and Dempster–Shafer theory. We compare these inductive logics with respect to the Ambiguity Dilemma, which is a problem that has been raised for Imprecise Bayesianism. We develop an agent-based (...)
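    For readers unfamiliar with the second framework compared here, the following is a minimal sketch of Dempster's rule of combination. It is not the authors' agent-based model; the mass functions and the frame {"safe", "risky"} are invented examples.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: combine two mass functions (dicts frozenset -> mass)."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y              # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: the sources cannot be combined")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Two sources of evidence over the frame {"safe", "risky"}:
m1 = {frozenset({"safe"}): 0.6, frozenset({"safe", "risky"}): 0.4}
m2 = {frozenset({"risky"}): 0.3, frozenset({"safe", "risky"}): 0.7}
print(dempster_combine(m1, m2))   # the conflict mass 0.18 is renormalized away
```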
  19. Why ChatGPT Doesn’t Think: An Argument from Rationality.Daniel Stoljar & Zhihe Vincent Zhang - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    Can AI systems such as ChatGPT think? We present an argument from rationality for the negative answer to this question. The argument is founded on two central ideas. The first is that if ChatGPT thinks, it is not rational, in the sense that it does not respond correctly to its evidence. The second idea, which appears in several different forms in philosophical literature, is that thinkers are by their nature rational. Putting the two ideas together yields the result that ChatGPT (...)
    1 citation
  20. Creative Minds Like Ours? Large Language Models and the Creative Aspect of Language Use.Vincent Carchidi - 2024 - Biolinguistics 18:1-31.
    Descartes famously constructed a language test to determine the existence of other minds. The test made critical observations about how humans use language that purportedly distinguishes them from animals and machines. These observations were carried into the generative (and later biolinguistic) enterprise under what Chomsky, in his Cartesian Linguistics, terms the “creative aspect of language use” (CALU). CALU refers to the stimulus-free, unbounded, yet appropriate use of language—a tripartite depiction whose function in biolinguistics is to highlight a species-specific form of (...)
  21. Learnability of state spaces of physical systems is undecidable.Petr Spelda & Vit Stritecky - 2024 - Journal of Computational Science 83 (December 2024):1-7.
    Despite an increasing role of machine learning in science, there is a lack of results on limits of empirical exploration aided by machine learning. In this paper, we construct one such limit by proving undecidability of learnability of state spaces of physical systems. We characterize state spaces as binary hypothesis classes of the computable Probably Approximately Correct learning framework. This leads to identifying the first limit for learnability of state spaces in the agnostic setting. Further, using the fact that finiteness (...)
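    For reference, here is a standard textbook statement of agnostic PAC learnability, the framework the abstract invokes. The notation below is generic and not taken from the paper.

```latex
% Agnostic PAC learnability (standard textbook form; generic notation).
% A binary hypothesis class $\mathcal{H}$ is agnostically PAC learnable if there
% exist $m_{\mathcal{H}} : (0,1)^2 \to \mathbb{N}$ and a learner $A$ such that for
% all $\epsilon, \delta \in (0,1)$ and every distribution $\mathcal{D}$ over
% $\mathcal{X} \times \{0,1\}$, with probability at least $1 - \delta$ over a sample
% $S \sim \mathcal{D}^m$ with $m \ge m_{\mathcal{H}}(\epsilon, \delta)$:
\[
  L_{\mathcal{D}}\bigl(A(S)\bigr) \le \min_{h \in \mathcal{H}} L_{\mathcal{D}}(h) + \epsilon,
  \qquad \text{where } L_{\mathcal{D}}(h) = \Pr_{(x,y) \sim \mathcal{D}}\bigl[h(x) \ne y\bigr].
\]
```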
  22. Interventionist Methods for Interpreting Deep Neural Networks.Raphaël Millière & Cameron Buckner - forthcoming - In Gualtiero Piccinini, Neurocognitive Foundations of Mind. Routledge.
    Recent breakthroughs in artificial intelligence have primarily resulted from training deep neural networks (DNNs) with vast numbers of adjustable parameters on enormous datasets. Due to their complex internal structure, DNNs are frequently characterized as inscrutable “black boxes,” making it challenging to interpret the mechanisms underlying their impressive performance. This opacity creates difficulties for explanation, safety assurance, trustworthiness, and comparisons to human cognition, leading to divergent perspectives on these systems. This chapter examines recent developments in interpretability methods for DNNs, with a (...)
    2 citations
  23. Should the use of adaptive machine learning systems in medicine be classified as research?Robert Sparrow, Joshua Hatherley, Justin Oakley & Chris Bain - 2024 - American Journal of Bioethics 24 (10):58-69.
    A novel advantage of the use of machine learning (ML) systems in medicine is their potential to continue learning from new data after implementation in clinical practice. To date, considerations of the ethical questions raised by the design and use of adaptive machine learning systems in medicine have, for the most part, been confined to discussion of the so-called “update problem,” which concerns how regulators should approach systems whose performance and parameters continue to change even after they have received regulatory (...)
    18 citations
  24. Interpretable and accurate prediction models for metagenomics data.Edi Prifti, Antoine Danchin, Jean-Daniel Zucker & Eugeni Belda - 2020 - Gigascience 9 (3):giaa010.
    Background: Microbiome biomarker discovery for patient diagnosis, prognosis, and risk evaluation is attracting broad interest. Selected groups of microbial features provide signatures that characterize host disease states such as cancer or cardio-metabolic diseases. Yet, the current predictive models stemming from machine learning still behave as black boxes and seldom generalize well. Their interpretation is challenging for physicians and biologists, which makes them difficult to trust and use routinely in the physician-patient decision-making process. Novel methods that provide interpretability and biological insight (...)
  25. Why and how to construct an epistemic justification of machine learning?Petr Spelda & Vit Stritecky - 2024 - Synthese 204 (2):1-24.
    Consider a set of shuffled observations drawn from a fixed probability distribution over some instance domain. What enables learning of inductive generalizations which proceed from such a set of observations? The scenario is worthwhile because it epistemically characterizes most of machine learning. This kind of learning from observations is also inverse and ill-posed. What reduces the non-uniqueness of its result and, thus, its problematic epistemic justification, which stems from a one-to-many relation between the observations and many learnable generalizations? The paper (...)
    1 citation
  26. Towards a Definition of Generative Artificial Intelligence.Raphael Ronge, Markus Maier & Benjamin Rathgeber - 2025 - Philosophy and Technology 38 (31):1-25.
    The concept of Generative Artificial Intelligence (GenAI) is ubiquitous in the public and semi-technical domain, yet rarely defined precisely. We clarify main concepts that are usually discussed in connection to GenAI and argue that one ought to distinguish between the technical and the public discourse. In order to show its complex development and associated conceptual ambiguities, we offer a historical-systematic reconstruction of GenAI and explicitly discuss two exemplary cases: the generative status of the Large Language Model BERT and the differences (...)
    1 citation
  27. A phenomenology and epistemology of large language models: transparency, trust, and trustworthiness.Richard Heersmink, Barend de Rooij, María Jimena Clavel Vázquez & Matteo Colombo - 2024 - Ethics and Information Technology 26 (3):1-15.
    This paper analyses the phenomenology and epistemology of chatbots such as ChatGPT and Bard. The computational architectures underpinning these chatbots are large language models (LLMs), which are generative artificial intelligence (AI) systems trained on a massive dataset of text extracted from the Web. We conceptualise these LLMs as multifunctional computational cognitive artifacts, used for various cognitive tasks such as translating, summarizing, answering questions, information-seeking, and much more. Phenomenologically, LLMs can be experienced as a “quasi-other”; when that happens, users anthropomorphise them. (...)
    3 citations
  28. Unjustified untrue "beliefs": AI hallucinations and justification logics.Kristina Šekrst - forthcoming - In Kordula Świętorzecka, Filip Grgić & Anna Brozek, Logic, Knowledge, and Tradition. Essays in Honor of Srecko Kovac.
    In artificial intelligence (AI), responses generated by machine-learning models (most often large language models) may present unfactual information as fact. For example, a chatbot might state that the Mona Lisa was painted in 1815. This phenomenon is called AI hallucination, a term that draws inspiration from human psychology, with the important difference that AI hallucinations are connected to unjustified beliefs (that is, AI “beliefs”) rather than perceptual failures. AI hallucinations may have their source in the data itself, that is, the (...)
    1 citation
  29. The Importance of Understanding Deep Learning.Tim Räz & Claus Beisbart - 2024 - Erkenntnis 89 (5).
    Some machine learning models, in particular deep neural networks (DNNs), are not very well understood; nevertheless, they are frequently used in science. Does this lack of understanding pose a problem for using DNNs to understand empirical phenomena? Emily Sullivan has recently argued that understanding with DNNs is not limited by our lack of understanding of DNNs themselves. In the present paper, we will argue, _contra_ Sullivan, that our current lack of understanding of DNNs does limit our ability to understand with (...)
    10 citations
  30. What is it for a Machine Learning Model to Have a Capability?Jacqueline Harding & Nathaniel Sharadin - forthcoming - British Journal for the Philosophy of Science.
    What can contemporary machine learning (ML) models do? Given the proliferation of ML models in society, answering this question matters to a variety of stakeholders, both public and private. The evaluation of models' capabilities is rapidly emerging as a key subfield of modern ML, buoyed by regulatory attention and government grants. Despite this, the notion of an ML model possessing a capability has not been interrogated: what are we saying when we say that a model is able to do something? (...)
    3 citations
  31. Does the no miracles argument apply to AI?Darrell P. Rowbottom, William Peden & André Curtis-Trudel - 2024 - Synthese 203 (173):1-20.
    According to the standard no miracles argument, science’s predictive success is best explained by the approximate truth of its theories. In contemporary science, however, machine learning systems, such as AlphaFold2, are also remarkably predictively successful. Thus, we might ask what best explains such successes. Might these AIs accurately represent critical aspects of their targets in the world? And if so, does a variant of the no miracles argument apply to these AIs? We argue for an affirmative answer to these questions. (...)
    1 citation
  32. Conversations with Chatbots.P. Connolly - forthcoming - In Patrick Connolly, Sandy Goldberg & Jennifer Saul, Conversations Online. Oxford University Press.
    The problem considered in this chapter emerges from the tension we find when looking at the design and architecture of chatbots on the one hand and their conversational aptitude on the other. In the way that LLM chatbots are designed and built, we have good reason to suppose they don't possess second-order capacities such as intention, belief or knowledge. Yet theories of conversation make great use of second-order capacities of speakers and their audiences to explain how aspects of interaction succeed. (...)
    1 citation
  33. Reliability in Machine Learning.Thomas Grote, Konstantin Genin & Emily Sullivan - 2024 - Philosophy Compass 19 (5):e12974.
    Issues of reliability are claiming center-stage in the epistemology of machine learning. This paper unifies different branches in the literature and points to promising research directions, whilst also providing an accessible introduction to key concepts in statistics and machine learning – as far as they are concerned with reliability.
    3 citations
  34. Data over dialogue: Why artificial intelligence is unlikely to humanise medicine.Joshua Hatherley - 2024 - Dissertation, Monash University
    Recently, a growing number of experts in artificial intelligence (AI) and medicine have begun to suggest that the use of AI systems, particularly machine learning (ML) systems, is likely to humanise the practice of medicine by substantially improving the quality of clinician-patient relationships. In this thesis, however, I argue that medical ML systems are more likely to negatively impact these relationships than to improve them. In particular, I argue that the use of medical ML systems is likely to compromise the (...)
    2 citations
  35. SIDEs: Separating Idealization from Deceptive ‘Explanations’ in xAI.Emily Sullivan - forthcoming - Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency.
    Explainable AI (xAI) methods are important for establishing trust in using black-box models. However, recent criticism has mounted against current xAI methods that they disagree, are necessarily false, and can be manipulated, which has started to undermine the deployment of black-box models. Rudin (2019) goes so far as to say that we should stop using black-box models altogether in high-stakes cases because xAI explanations ‘must be wrong’. However, strict fidelity to the truth is historically not a desideratum in science. Idealizations (...)
    1 citation
  36. Understanding Biology in the Age of Artificial Intelligence.Adham El Shazly, Elsa Lawrence, Srijit Seal, Chaitanya Joshi, Matthew Greening, Pietro Lio, Shantung Singh, Andreas Bender & Pietro Sormanni - manuscript
    Modern life sciences research is increasingly relying on artificial intelligence (AI) approaches to model biological systems, primarily centered around the use of machine learning (ML) models. Although ML is undeniably useful for identifying patterns in large, complex data sets, its widespread application in biological sciences represents a significant deviation from traditional methods of scientific inquiry. As such, the interplay between these models and scientific understanding in biology is a topic with important implications for the future of scientific research, yet it (...)
  37. Machina sapiens.Nello Cristianini - 2024 - Bologna: Il Mulino.
    Machina sapiens: the algorithm that stole the secret of knowledge from us. Can machines think? This unsettling question, posed by Alan Turing in 1950, has perhaps found an answer: today one can converse with a computer without being able to distinguish it from a human being. New intelligent agents such as ChatGPT have proved capable of performing tasks that go far beyond the original intentions of their creators, and we still do not know why: while they were trained for certain abilities, others (...)
  38. From ethics to epistemology and back again: informativeness and epistemic injustice in explanatory medical machine learning.Giorgia Pozzi & Juan M. Durán - forthcoming - AI and Society:1-12.
    In this paper, we discuss epistemic and ethical concerns brought about by machine learning (ML) systems implemented in medicine. We begin by fleshing out the logic underlying a common approach in the specialized literature (which we call the _informativeness account_). We maintain that the informativeness account limits its analysis to the impact of epistemological issues on ethical concerns without assessing the bearings that ethical features have on the epistemological evaluation of ML systems. We argue that according to this methodological approach, (...)
    1 citation
  39. Responding to the Watson-Sterkenburg debate on clustering algorithms and natural kinds.Warmhold Jan Thomas Mollema - manuscript
    In Philosophy and Technology 36, David Watson discusses the epistemological and metaphysical implications of unsupervised machine learning (ML) algorithms. Watson is sympathetic to the epistemological comparison of unsupervised clustering, abstraction and generative algorithms to human cognition and sceptical about ML’s mechanisms having ontological implications. His epistemological commitments are that we learn to identify “natural kinds through clustering algorithms”, “essential properties via abstraction algorithms”, and “unrealized possibilities via generative models” “or something very much like them.” The same issue contains a commentary (...)
  40. Machine learning in healthcare and the methodological priority of epistemology over ethics.Thomas Grote - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    This paper develops an account of how the implementation of ML models into healthcare settings requires revising the methodological apparatus of philosophical bioethics. On this account, ML models are cognitive interventions that provide decision-support to physicians and patients. Due to reliability issues, opaque reasoning processes, and information asymmetries, ML models pose inferential problems for them. These inferential problems lay the grounds for many ethical problems that currently claim centre-stage in the bioethical debate. Accordingly, this paper argues that the best way (...)
    2 citations
  41. Performance Comparison and Implementation of Bayesian Variants for Network Intrusion Detection.Tosin Ige & Christopher Kiekintveld - 2023 - Proceedings of the IEEE 1:5.
    Bayesian classifiers perform well when each of the features is completely independent of the others, which is not always valid in real-world applications. The aim of this study is to implement and compare the performances of each variant of the Bayesian classifier (Multinomial, Bernoulli, and Gaussian) on anomaly detection in network intrusion, and to investigate whether there is any association between each variant's assumption and its performance. Our investigation showed that each variant of the Bayesian algorithm blindly follows its (...)
    13 citations
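    A minimal sketch of the kind of comparison the abstract describes, using scikit-learn's three Naive Bayes variants. This is not the authors' code; the feature matrix and labels are random placeholders standing in for network-flow data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB, BernoulliNB, GaussianNB
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.random((1000, 20))      # placeholder features; real data would be flow statistics
y = rng.integers(0, 2, 1000)    # placeholder labels: 0 = normal, 1 = anomalous

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Each variant makes the same conditional-independence assumption but a
# different distributional one: count features (Multinomial), binary features
# (Bernoulli, here binarized at 0.5), and real-valued features (Gaussian).
for name, clf in [("Multinomial", MultinomialNB()),
                  ("Bernoulli", BernoulliNB(binarize=0.5)),
                  ("Gaussian", GaussianNB())]:
    clf.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, clf.predict(X_te)))
```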
  42. Explainable Artificial Intelligence (XAI) 2.0: A Manifesto of Open Challenges and Interdisciplinary Research Directions.Luca Longo, Mario Brcic, Federico Cabitza, Jaesik Choi, Roberto Confalonieri, Javier Del Ser, Riccardo Guidotti, Yoichi Hayashi, Francisco Herrera, Andreas Holzinger, Richard Jiang, Hassan Khosravi, Freddy Lecue, Gianclaudio Malgieri, Andrés Páez, Wojciech Samek, Johannes Schneider, Timo Speith & Simone Stumpf - 2024 - Information Fusion 106 (June 2024).
    As systems based on opaque Artificial Intelligence (AI) continue to flourish in diverse real-world applications, understanding these black box models has become paramount. In response, Explainable AI (XAI) has emerged as a field of research with practical and ethical benefits across various domains. This paper not only highlights the advancements in XAI and its application in real-world scenarios but also addresses the ongoing challenges within XAI, emphasizing the need for broader perspectives and collaborative efforts. We bring together experts from diverse (...)
    4 citations
  43. Operationalising Representation in Natural Language Processing.Jacqueline Harding - 2023 - British Journal for the Philosophy of Science.
    Despite its centrality in the philosophy of cognitive science, there has been little prior philosophical work engaging with the notion of representation in contemporary NLP practice. This paper attempts to fill that lacuna: drawing on ideas from cognitive science, I introduce a framework for evaluating the representational claims made about components of neural NLP models, proposing three criteria with which to evaluate whether a component of a model represents a property and operationalising these criteria using probing classifiers, a popular analysis (...)
    11 citations
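    The probing-classifier methodology the abstract operationalises can be sketched briefly. The following is an illustrative example, not the paper's code; the random arrays stand in for a frozen model's activations and for the property labels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
embeddings = rng.normal(size=(500, 64))   # stand-in for a frozen model's activations
labels = rng.integers(0, 2, 500)          # stand-in property labels (e.g. past/present tense)

X_tr, X_te, y_tr, y_te = train_test_split(embeddings, labels, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# On real activations, held-out accuracy well above chance is taken as
# (defeasible) evidence that the property is decodable from the representation;
# on this random data it should hover around 0.5.
print("probe accuracy:", probe.score(X_te, y_te))
```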
  44. Encoder-Decoder Based Long Short-Term Memory (LSTM) Model for Video Captioning.Adewale Sikiru, Tosin Ige & Bolanle Matti Hafiz - forthcoming - Proceedings of the IEEE:1-6.
    This work demonstrates the implementation and use of an encoder-decoder model to perform a many-to-many mapping of video data to text captions. The many-to-many mapping occurs via an input temporal sequence of video frames to an output sequence of words to form a caption sentence. Data preprocessing, model construction, and model training are discussed. Caption correctness is evaluated using 2-gram BLEU scores across the different splits of the dataset. Specific examples of output captions were shown to demonstrate model generality over (...)
    7 citations
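    For readers unfamiliar with the evaluation metric mentioned above, here is a minimal sketch of 2-gram BLEU scoring using NLTK. This is not the authors' pipeline; the reference and candidate captions are invented.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["a", "man", "is", "playing", "a", "guitar"]]  # list of reference captions
candidate = ["a", "man", "plays", "a", "guitar"]            # model-generated caption

# weights=(0.5, 0.5) averages unigram and bigram precision, i.e. 2-gram BLEU;
# smoothing avoids zero scores when some bigram counts are zero.
score = sentence_bleu(reference, candidate,
                      weights=(0.5, 0.5),
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU-2: {score:.3f}")
```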
  45. Making decisions with evidential probability and objective Bayesian calibration inductive logics.Mantas Radzvilas, William Peden & Francesco De Pretis - forthcoming - International Journal of Approximate Reasoning:1-37.
    Calibration inductive logics are based on accepting estimates of relative frequencies, which are used to generate imprecise probabilities. In turn, these imprecise probabilities are intended to guide beliefs and decisions — a process called “calibration”. Two prominent examples are Henry E. Kyburg's system of Evidential Probability and Jon Williamson's version of Objective Bayesianism. There are many unexplored questions about these logics. How well do they perform in the short run? Under what circumstances do they do better or worse? What is their (...)
  46. Can AI Abstract the Architecture of Mathematics?Posina Rayudu - manuscript
    The irrational exuberance associated with contemporary artificial intelligence (AI) reminds me of Charles Dickens: "it was the age of foolishness, it was the epoch of belief" (cf. Nature Editorial, 2016; to get a feel for the vanity fair that is AI, see Mitchell and Krakauer, 2023; Stilgoe, 2023). It is particularly distressing—feels like yet another rerun of Seinfeld, which is all about nothing (pun intended); we have seen it in the 60s and again in the 90s. AI might have had (...)
  47. (1 other version)Experts or Authorities? The Strange Case of the Presumed Epistemic Superiority of Artificial Intelligence Systems.Andrea Ferrario, Alessandro Facchini & Alberto Termine - manuscript
    The high predictive accuracy of contemporary machine learning-based AI systems has led some scholars to argue that, in certain cases, we should grant them epistemic expertise and authority over humans. This approach suggests that humans would have the epistemic obligation of relying on the predictions of a highly accurate AI system. Contrary to this view, in this work we claim that it is not possible to endow AI systems with a genuine account of epistemic expertise. In fact, relying on accounts (...)
    1 citation
  48. Real Sparks of Artificial Intelligence and the Importance of Inner Interpretability.Alex Grzankowski - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    The present paper looks at one of the most thorough articles on the intelligence of GPT, research conducted by engineers at Microsoft. Although there is a great deal of value in their work, I will argue that, for familiar philosophical reasons, their methodology, ‘Black-box Interpretability’ is wrongheaded. But there is a better way. There is an exciting and emerging discipline of ‘Inner Interpretability’ (also sometimes called ‘White-box Interpretability’) that aims to uncover the internal activations and weights of models in order (...)
    2 citations
  49. On Philomatics and Psychomatics for Combining Philosophy and Psychology with Mathematics.Benyamin Ghojogh & Morteza Babaie - manuscript
    We propose the concepts of philomatics and psychomatics as hybrid combinations of philosophy and psychology with mathematics. We explain four motivations for this combination which are fulfilling the desire of analytical philosophy, proposing science of philosophy, justifying mathematical algorithms by philosophy, and abstraction in both philosophy and mathematics. We enumerate various examples for philomatics and psychomatics, some of which are explained in more depth. The first example is the analysis of relation between the context principle, semantic holism, and the usage (...)
  50. From deep learning to rational machines: what the history of philosophy can teach us about the future of artificial intelligence.Cameron J. Buckner - 2024 - New York, NY: Oxford University Press.
    This book provides a framework for thinking about foundational philosophical questions surrounding machine learning as an approach to artificial intelligence. Specifically, it links recent breakthroughs in deep learning to classical empiricist philosophy of mind. In recent assessments of deep learning's current capabilities and future potential, prominent scientists have cited historical figures from the perennial philosophical debate between nativism and empiricism, which primarily concerns the origins of abstract knowledge. These empiricists were generally faculty psychologists; that is, they argued that the active (...)
    12 citations