Results for 'opacity model'

977 found
  1. Models, Parameterization, and Software: Epistemic Opacity in Computational Chemistry. Frédéric Wieber & Alexandre Hocquet - 2020 - Perspectives on Science 28 (5):610-629.
    Computational chemistry grew in a new era of “desktop modeling,” which coincided with a growing demand for modeling software, especially from the pharmaceutical industry. Parameterization of models in computational chemistry is an arduous enterprise, and we argue that this activity leads, in this specific context, to tensions among scientists regarding the epistemic opacity and transparency of parameterized methods and the software implementing them. We relate one flame war from the Computational Chemistry List in order to assess in detail (...)
    4 citations
  2. How Reference Works: Explanatory Models for Indexicals, Descriptions, and Opacity. Lawrence D. Roberts - 1993 - SUNY Press.
    If some aspects of human behavior are too murky to see into, others are too close and transparent to examine; one that has eluded both scientists and philosophers is how speakers of natural languages make words and expressions refer to specific objects in the world. Marshalling his expertise in philosophy, computers, and system science (State U. of.
    4 citations
  3. Deep Learning Opacity in Scientific Discovery. Eamon Duede - 2023 - Philosophy of Science 90 (5):1089-1099.
    Philosophers have recently focused on critical, epistemological challenges that arise from the opacity of deep neural networks. One might conclude from this literature that doing good science with opaque models is exceptionally challenging, if not impossible. Yet, this is hard to square with the recent boom in optimism for AI in science alongside a flood of recent scientific breakthroughs driven by AI methods. In this paper, I argue that the disconnect between philosophical pessimism and scientific optimism is driven by (...)
    11 citations
  4. On the Opacity of Deep Neural Networks. Anders Søgaard - 2023 - Canadian Journal of Philosophy:1-16.
    Deep neural networks are said to be opaque, impeding the development of safe and trustworthy artificial intelligence, but where this opacity stems from is less clear. What are the sufficient properties for neural network opacity? Here, I discuss five common properties of deep neural networks and two different kinds of opacity. Which of these properties are sufficient for what type of opacity? I show how each kind of opacity stems from only one of these five (...)
    2 citations
  5. The Logic of Opacity. Andrew Bacon & Jeffrey Sanford Russell - 2017 - Philosophy and Phenomenological Research 99 (1):81-114.
    We explore the view that Frege's puzzle is a source of straightforward counterexamples to Leibniz's law. Taking this seriously requires us to revise the classical logic of quantifiers and identity; we work out the options, in the context of higher-order logic. The logics we arrive at provide the resources for a straightforward semantics of attitude reports that is consistent with the Millian thesis that the meaning of a name is just the thing it stands for. We provide models to show (...)
    22 citations
  6. Probabilistically coherent credences despite opacity. Christian List - 2024 - Economics and Philosophy 40 (2):497-506.
    Real human agents, even when they are rational by everyday standards, sometimes assign different credences to objectively equivalent statements, such as ‘Orwell is a writer’ and ‘E.A. Blair is a writer’, or credences less than 1 to necessarily true statements, such as not-yet-proven theorems of arithmetic. Anna Mahtani calls this the phenomenon of ‘opacity’. Opaque credences seem probabilistically incoherent, which goes against a key modelling assumption of probability theory. I sketch a modelling strategy for capturing opaque credence assignments without (...)
  7. Simulation, Epistemic Opacity, and ‘Envirotechnical Ignorance’ in Nuclear Crisis. Tudor B. Ionescu - 2019 - Minds and Machines 29 (1):61-86.
    The Fukushima nuclear accident from 2011 provided an occasion for the public display of radiation maps generated using decision-support systems for nuclear emergency management. Such systems rely on computer models for simulating the atmospheric dispersion of radioactive materials and estimating potential doses in the event of a radioactive release from a nuclear reactor. In Germany, as in Japan, such systems are part of the national emergency response apparatus and, in case of accidents, they can be used by emergency task forces (...)
    1 citation
  8. Ethics of Opacity in Harold Sonny Ladoo’s No Pain Like This Body. Shawn Gonzalez - 2018 - CLR James Journal 24 (1):215-237.
    Harold Sonny Ladoo’s 1972 novel No Pain Like This Body has been analyzed for its seminal representation of the traumas experienced by a formerly indentured Indo-Trinidadian family in the early twentieth century. However, relatively little attention has been given to Ladoo’s experimentation with multiple languages, particularly English, Trinidadian Creole, and Hindi. This article argues that Ladoo’s multilingualism offers a guide for approaching the traumatic experiences he represents. While some aspects of the novel, such as its glossary, make the characters’ language (...)
  9. Models, Algorithms, and the Subjects of Transparency. Hajo Greif - 2022 - In Vincent C. Müller (ed.), Philosophy and Theory of Artificial Intelligence 2021. Berlin: Springer. pp. 27-37.
    Concerns over epistemic opacity abound in contemporary debates on Artificial Intelligence (AI). However, it is not always clear to what extent these concerns refer to the same set of problems. We can observe, first, that the terms 'transparency' and 'opacity' are used either in reference to the computational elements of an AI model or to the models to which they pertain. Second, opacity and transparency might either be understood to refer to the properties of AI systems (...)
  10. The value of opacity: A Bakhtinian analysis of Habermas's discourse ethics. T. Gregory Garvey - 2000 - Philosophy and Rhetoric 33 (4):370-390.
    The article focuses on the value of opacity in communication. Jurgen Habermas's and M.M. Bakhtin's attitudes toward transparent or undistorted communication define almost antithetical approaches to the relationship between public discourse and autonomy. Habermas, both in his theory of communicative action and in his discourse ethics, assumes that transparent communication is possible and actually makes transparency a necessary condition for the legitimation of social norms. Yet, there is a sense in which the same kind of transparency that offers the (...)
    1 citation
  11. Two Dimensions of Opacity and the Deep Learning Predicament. Florian J. Boge - 2021 - Minds and Machines 32 (1):43-75.
    Deep neural networks have become increasingly successful in applications from biology to cosmology to social science. Trained DNNs, moreover, correspond to models that ideally allow the prediction of new phenomena. Building in part on the literature on ‘eXplainable AI’, I here argue that these models are instrumental in a sense that makes them non-explanatory, and that their automated generation is opaque in a unique way. This combination implies the possibility of an unprecedented gap between discovery and explanation: When unsupervised models (...)
    28 citations
  12. Review of Roberts (1993): How Reference Works: Explanatory Models for Indexicals, Descriptions and Opacity. [REVIEW] Michael J. Wreen - 1998 - Pragmatics and Cognition 6 (1-2):349-357.
  13. Opacity and Transparency. Daniel Hausknost - 2023 - Theoria: A Journal of Social and Political Theory 70 (177):26-53.
    I present the contours of an explanatory model of legitimacy that directs the focus away from normative questions and onto specific mechanisms of reality construction at play in constituting social orders. The key assumption informing the model is that stable orders rely fundamentally on their capacities to construct separate spheres of social reality, by which they exempt critical parts of reality from the burden of legitimation. I argue that an order's legitimacy ultimately depends on its ability to confine (...)
  14. The Metaphysics of Opacity. Catharine Diehl & Beau Madison Mount - 2023 - Philosophers' Imprint 23 (1).
    This paper examines the logical and metaphysical consequences of denying Leibniz's Law, the principle that if t1= t2, then φ(t1) if and only if φ(t2). Recently, Caie, Goodman, and Lederman (2020) and Bacon and Russell (2019) have proposed sophisticated logical systems permitting violations of Leibniz's Law. We show that their systems conflict with widely held, attractive principles concerning the metaphysics of individuals. Only by adopting a highly revisionary picture, on which there is no finest-grained equivalence relation, can a well-motivated metaphysics (...)
  15. Chess, Artificial Intelligence, and Epistemic Opacity. Paul Grünke - 2019 - Információs Társadalom 19 (4):7-17.
    In 2017 AlphaZero, a neural network-based chess engine shook the chess world by convincingly beating Stockfish, the highest-rated chess engine. In this paper, I describe the technical differences between the two chess engines and based on that, I discuss the impact of the modeling choices on the respective epistemic opacities. I argue that the success of AlphaZero’s approach with neural networks and reinforcement learning is counterbalanced by an increase in the epistemic opacity of the resulting model.
    1 citation
  16. Analogue Models and Universal Machines. Paradigms of Epistemic Transparency in Artificial Intelligence. Hajo Greif - 2022 - Minds and Machines 32 (1):111-133.
    The problem of epistemic opacity in Artificial Intelligence is often characterised as a problem of intransparent algorithms that give rise to intransparent models. However, the degrees of transparency of an AI model should not be taken as an absolute measure of the properties of its algorithms but of the model’s degree of intelligibility to human users. Its epistemically relevant elements are to be specified on various levels above and beyond the computational one. In order to elucidate this (...)
  17. Understanding from Machine Learning Models. Emily Sullivan - 2022 - British Journal for the Philosophy of Science 73 (1):109-133.
    Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic models. However, an increasing number of scientists are going in the opposite direction by utilizing opaque machine learning models to make predictions and draw inferences, suggesting that scientists are opting for models that have less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions behind why minimal models provide understanding misguided? (...)
    62 citations
  18. Agency, freedom, and the blessings of opacity. Edwin C. Laurenson - 2011 - Zygon 46 (1):111-120.
    How can the decisions of “autonomous” individuals provide a rationale for freedom and self-governance if a mechanical and causal sense of the self leads us to question the foundational nature of the individual? If most of our decisions originate in brain function below the level of consciousness, we live in a virtual world produced by mechanisms outside our control, arising from transparent self-models of which we are not aware. “Opacity,” the gift of not perceiving directly, of not automatically (...)
    3 citations
  19. A pluralist hybrid model for moral AIs. Fei Song & Shing Hay Felix Yeung - forthcoming - AI and Society:1-10.
    With the increasing degrees A.I.s and machines are applied across different social contexts, the need for implementing ethics in A.I.s is pressing. In this paper, we argue for a pluralist hybrid model for the implementation of moral A.I.s. We first survey current approaches to moral A.I.s and their inherent limitations. Then we propose the pluralist hybrid approach and show how these limitations of moral A.I.s can be partly alleviated by the pluralist hybrid approach. The core ethical decision-making capacity of (...)
    1 citation
  20. Diagnosing errors in climate model intercomparisons. Ryan O’Loughlin - 2023 - European Journal for Philosophy of Science 13 (2):1-29.
    I examine error diagnosis (model-model disagreement) in climate model intercomparisons including its difficulties, fruitful examples, and prospects for streamlining error diagnosis. I suggest that features of climate model intercomparisons pose a more significant challenge for error diagnosis than do features of individual model construction and complexity. Such features of intercomparisons include, e.g., the number of models involved, how models from different institutions interrelate, and what scientists know about each model. By considering numerous examples in (...)
    2 citations
  21. How Values Shape the Machine Learning Opacity Problem. Emily Sullivan - 2022 - In Insa Lawler, Kareem Khalifa & Elay Shech (eds.), Scientific Understanding and Representation: Modeling in the Physical Sciences. New York, NY: Routledge. pp. 306-322.
    One of the main worries with machine learning model opacity is that we cannot know enough about how the model works to fully understand the decisions it makes. But how much of a problem is model opacity, really? This chapter argues that the problem of machine learning model opacity is entangled with non-epistemic values. The chapter considers three different stages of the machine learning modeling process that correspond to understanding phenomena: (i) model acceptance (...)
  22. Standing on the Shoulders of Giants—And Then Looking the Other Way? Epistemic Opacity, Immersion, and Modeling in Hydraulic Engineering. Matthijs Kouw - 2016 - Perspectives on Science 24 (2):206-227.
    Over the course of the twentieth century, hydraulic engineering has come to rely primarily on the use of computational models. Disco and van den Ende hint towards the reasons for widespread adoption of computational models by pointing out that such models fulfill a crucial role as management tools in Dutch water management, and meet a more general desire to quantify water-related phenomena. The successful application of computational models implies blackboxing : “[w]hen a machine runs efficiently … one need focus only (...)
    2 citations
  23. The Simulative Role of Neural Language Models in Brain Language Processing. Nicola Angius, Pietro Perconti, Alessio Plebe & Alessandro Acciai - 2024 - Philosophies 9 (5):137.
    This paper provides an epistemological and methodological analysis of the recent practice of using neural language models to simulate brain language processing. It is argued that, on the one hand, this practice can be understood as an instance of the traditional simulative method in artificial intelligence, following a mechanistic understanding of the mind; on the other hand, that it modifies the simulative method significantly. Firstly, neural language models are introduced; a study case showing how neural language models are being applied (...)
  24. On the individuation of complex computational models: Gilbert Simondon and the technicity of AI. Susana Aires - forthcoming - AI and Society:1-14.
    The proliferation of AI systems across all domains of life, as well as the complexification and opacity of algorithmic techniques, epitomised by the burgeoning field of Deep Learning (DL), call for new methods in the Humanities for reflecting on the techno-human relation in a way that places the technical operation at its core. Grounded on the work of the philosopher of technology Gilbert Simondon, this paper puts forward individuation theory as a valuable approach to reflect on contemporary information technologies, (...)
  25. Inductive Risk, Understanding, and Opaque Machine Learning Models. Emily Sullivan - 2022 - Philosophy of Science 89 (5):1065-1074.
    Under what conditions does machine learning (ML) model opacity inhibit the possibility of explaining and understanding phenomena? In this article, I argue that nonepistemic values give shape to the ML opacity problem even if we keep researcher interests fixed. Treating ML models as an instance of doing model-based science to explain and understand phenomena reveals that there is (i) an external opacity problem, where the presence of inductive risk imposes higher standards on externally validating models, (...)
    8 citations
  26. Link Uncertainty, Implementation, and ML Opacity: A Reply to Tamir and Shech. Emily Sullivan - 2022 - In Insa Lawler, Kareem Khalifa & Elay Shech (eds.), Scientific Understanding and Representation: Modeling in the Physical Sciences. New York, NY: Routledge. pp. 341-345.
    This chapter responds to Michael Tamir and Elay Shech’s chapter “Understanding from Deep Learning Models in Context.”
  27. Testing Bottom-Up Models of Complex Citation Networks. Mark A. Bedau - 2014 - Philosophy of Science 81 (5):1131-1143.
    The robust behavior of the patent citation network is a complex target of recent bottom-up models in science. This paper investigates the purpose and testing of three especially simple bottom-up models of the citation count distribution observed in the patent citation network. The complex causal webs in the models generate weakly emergent patterns of behavior, and this explains both the need for empirical observation of computer simulations of the models and the epistemic harmlessness of the resulting epistemic opacity.
    3 citations
  28. Deep learning models and the limits of explainable artificial intelligence. Jens Christian Bjerring, Jakob Mainz & Lauritz Munch - 2025 - Asian Journal of Philosophy 4 (1):1-26.
    It has often been argued that we face a trade-off between accuracy and opacity in deep learning models. The idea is that we can only harness the accuracy of deep learning models by simultaneously accepting that the grounds for the models’ decision-making are epistemically opaque to us. In this paper, we ask the following question: what are the prospects of making deep learning models transparent without compromising on their accuracy? We argue that the answer to this question depends on (...)
  29. Pinyin Spelling Promotes Reading Abilities of Adolescents Learning Chinese as a Foreign Language: Evidence From Mediation Models. Huimin Xiao, Caihua Xu & Hetty Rusamy - 2020 - Frontiers in Psychology 11.
    Pinyin is a phonological encoding system used to spell modern Chinese Mandarin due to the phonological opacity of Chinese characters. The present study examined the role of Pinyin spelling in the reading abilities of adolescents learning Chinese as a foreign language. A total of 158 Indonesian senior primary students were tested on Pinyin spelling, character production, listening comprehension, depth of vocabulary knowledge, and reading comprehension. Pinyin spelling skills were assessed by two measures, Pinyin Dictation and Pinyin Tagging. Path analysis (...)
    1 citation
  30. A phenomenology and epistemology of large language models: transparency, trust, and trustworthiness. Richard Heersmink, Barend de Rooij, María Jimena Clavel Vázquez & Matteo Colombo - 2024 - Ethics and Information Technology 26 (3):1-15.
    This paper analyses the phenomenology and epistemology of chatbots such as ChatGPT and Bard. The computational architecture underpinning these chatbots are large language models (LLMs), which are generative artificial intelligence (AI) systems trained on a massive dataset of text extracted from the Web. We conceptualise these LLMs as multifunctional computational cognitive artifacts, used for various cognitive tasks such as translating, summarizing, answering questions, information-seeking, and much more. Phenomenologically, LLMs can be experienced as a “quasi-other”; when that happens, users anthropomorphise them. (...)
  31. Scientific Exploration and Explainable Artificial Intelligence. Carlos Zednik & Hannes Boelsen - 2022 - Minds and Machines 32 (1):219-239.
    Models developed using machine learning are increasingly prevalent in scientific research. At the same time, these models are notoriously opaque. Explainable AI aims to mitigate the impact of opacity by rendering opaque models transparent. More than being just the solution to a problem, however, Explainable AI can also play an invaluable role in scientific exploration. This paper describes how post-hoc analytic techniques from Explainable AI can be used to refine target phenomena in medical science, to identify starting points for (...)
    16 citations
  32. Peeking Inside the Black Box: A New Kind of Scientific Visualization. Michael T. Stuart & Nancy J. Nersessian - 2018 - Minds and Machines 29 (1):87-107.
    Computational systems biologists create and manipulate computational models of biological systems, but they do not always have straightforward epistemic access to the content and behavioural profile of such models because of their length, coding idiosyncrasies, and formal complexity. This creates difficulties both for modellers in their research groups and for their bioscience collaborators who rely on these models. In this paper we introduce a new kind of visualization that was developed to address just this sort of epistemic opacity. The (...)
    7 citations
  33. Allure of Simplicity. Thomas Grote - 2023 - Philosophy of Medicine 4 (1).
    This paper develops an account of the opacity problem in medical machine learning (ML). Guided by pragmatist assumptions, I argue that opacity in ML models is problematic insofar as it potentially undermines the achievement of two key purposes: ensuring generalizability and optimizing clinician–machine decision-making. Three opacity amelioration strategies are examined, with explainable artificial intelligence (XAI) as the predominant approach, challenged by two revisionary strategies in the form of reliabilism and the interpretability by design. Comparing the three strategies, (...)
    3 citations
  34. Understanding, Idealization, and Explainable AI. Will Fleisher - 2022 - Episteme 19 (4):534-560.
    Many AI systems that make important decisions are black boxes: how they function is opaque even to their developers. This is due to their high complexity and to the fact that they are trained rather than programmed. Efforts to alleviate the opacity of black box systems are typically discussed in terms of transparency, interpretability, and explainability. However, there is little agreement about what these key concepts mean, which makes it difficult to adjudicate the success or promise of opacity (...)
    18 citations
  35. What Could Human Rights Do? A Decolonial Inquiry. Benjamin Davis - 2020 - Transmodernity 5 (9):1-22.
    It is one thing to consider what human rights have been and another to inquire into what they could be. In this essay, I present a history of human rights vis-à-vis decolonization. I follow the scholarship of Samuel Moyn to suggest that human rights presented a “moral alternative” to political utopias. The question remains how to politicize the moral energy around human rights today. I argue that defending what Édouard Glissant calls a “right to opacity” could politicize the ethical (...)
  36. Psychedelics and Meditation: A Neurophilosophical Perspective. Chris Letheby - 2022 - In Rick Repetti (ed.), Routledge Handbook on the Philosophy of Meditation. New York, NY: Routledge. pp. 209-223.
    Psychedelic ingestion and meditative practice are both ancient methods for altering consciousness that became widely known in Western society in the second half of the 20th century. Do the similarities begin and end there, or do these methods – as many have claimed over the years – share some deeper common elements? In this chapter I take a neurophilosophical approach to this question and argue that there are, indeed, deeper commonalities. Recent empirical studies show that psychedelics and meditation modulate overlapping (...)
    3 citations
  37. Black-box artificial intelligence: an epistemological and critical analysis. Manuel Carabantes - 2020 - AI and Society 35 (2):309-317.
    The artificial intelligence models with machine learning that exhibit the best predictive accuracy, and therefore, the most powerful ones, are, paradoxically, those with the most opaque black-box architectures. At the same time, the unstoppable computerization of advanced industrial societies demands the use of these machines in a growing number of domains. The conjunction of both phenomena gives rise to a control problem on AI that in this paper we analyze by dividing the issue into two. First, we carry out an (...)
    17 citations
  38. Artificial Epistemic Authorities. Rico Hauswald - forthcoming - Social Epistemology.
    While AI systems are increasingly assuming roles traditionally occupied by human epistemic authorities (EAs), their epistemological status remains unclear. This paper aims to address this lacuna by assessing the potential for AI systems to be recognized as artificial epistemic authorities. In a first step, I examine the arguments against considering AI systems as EAs, in particular the established model of EAs as engaging in intentional belief transfer via testimony to laypeople – a process seemingly inapplicable to intentionless and beliefless (...)
  39. The Multidimensional Epistemology of Computer Simulations: Novel Issues and the Need to Avoid the Drunkard’s Search Fallacy. Cyrille Imbert - 2019 - In Claus Beisbart & Nicole J. Saam (eds.), Computer Simulation Validation: Fundamental Concepts, Methodological Frameworks, and Philosophical Perspectives. Springer Verlag. pp. 1029-1055.
    Computers have transformed science and help to extend the boundaries of human knowledge. However, does the validation and diffusion of results of computational inquiries and computer simulations call for a novel epistemological analysis? I discuss how the notion of novelty should be cashed out to investigate this issue meaningfully and argue that a consequentialist framework similar to the one used by Goldman to develop social epistemology can be helpful at this point. I highlight computational, mathematical, representational, and social stages (...)
  40. Algorithmic Decision-Making Based on Machine Learning from Big Data: Can Transparency Restore Accountability? Paul B. de Laat - 2018 - Philosophy and Technology 31 (4):525-541.
    Decision-making assisted by algorithms developed by machine learning is increasingly determining our lives. Unfortunately, full opacity about the process is the norm. Would transparency contribute to restoring accountability for such systems as is often maintained? Several objections to full transparency are examined: the loss of privacy when datasets become public, the perverse effects of disclosure of the very algorithms themselves, the potential loss of companies’ competitive edge, and the limited gains in answerability to be expected since sophisticated algorithms usually (...)
    31 citations
  41. Algorithmic Decision-Making Based on Machine Learning from Big Data: Can Transparency Restore Accountability? Paul Laat - 2018 - Philosophy and Technology 31 (4):525-541.
    Decision-making assisted by algorithms developed by machine learning is increasingly determining our lives. Unfortunately, full opacity about the process is the norm. Would transparency contribute to restoring accountability for such systems as is often maintained? Several objections to full transparency are examined: the loss of privacy when datasets become public, the perverse effects of disclosure of the very algorithms themselves (“gaming the system” in particular), the potential loss of companies’ competitive edge, and the limited gains in answerability to be (...)
    29 citations
  42. A Socially Constructive Social Contract: The Need for Coalitions in Corrective Justice. Nina Windgaetter - 2017 - Dissertation, University of Michigan.
    In my dissertation, I argue that the enterprise of corrective justice requires answering questions about what is unjust and how we ought to set and pursue corrective justice goals. To answer these questions in a way that will allow us to correct for the persistent and entrenched injustices which result from processes of stratification in our society, I’ll put forward a two-tiered social contract theory, which will allow us to approach these questions in a way that will capture the agreement (...)
43. Mannequins and Spirits: Representation and Resistance of Siberian Shamans. Thomas R. Miller - 1999 - Anthropology of Consciousness 10 (4):69-80.
In the early 20th century anthropologists collected sounds, images and artifacts to represent traditional cultures. Under the direction of Franz Boas, anthropologists working for the American Museum of Natural History's Jesup North Pacific Expedition documented a variety of northeastern Siberian shamanisms. Demonstrations staged for the phonograph and the camera served as models for museum representations. These ethnographic inscriptions, together with the collection of texts and sacred objects, documented shamanistic traditions; yet ceremonial traditions remained partially obscured, resisting full intelligibility. The complexity of (...)
1 citation
44. Epistemologie der Iteration. Gedankenexperimente und Simulationsexperimente [Epistemology of Iteration: Thought Experiments and Simulation Experiments]. Johannes Lenhard - 2011 - Deutsche Zeitschrift für Philosophie 59 (1):131-145.
    Thought experiments and simulation experiments are compared and contrasted with each other. While the former rely on epistemic transparency as a working condition, in the latter complexity of model dynamics leads to epistemic opacity. The difference is elucidated by a discussion of the different kinds of iteration that are at work in both sorts of experiment.
6 citations
45. Uncertainty, Evidence, and the Integration of Machine Learning into Medical Practice. Thomas Grote & Philipp Berens - 2023 - Journal of Medicine and Philosophy 48 (1):84-97.
    In light of recent advances in machine learning for medical applications, the automation of medical diagnostics is imminent. That said, before machine learning algorithms find their way into clinical practice, various problems at the epistemic level need to be overcome. In this paper, we discuss different sources of uncertainty arising for clinicians trying to evaluate the trustworthiness of algorithmic evidence when making diagnostic judgments. Thereby, we examine many of the limitations of current machine learning algorithms (with deep learning in particular) (...)
2 citations
46. The ethical requirement of explainability for AI-DSS in healthcare: a systematic review of reasons. Nils Freyer, Dominik Groß & Myriam Lipprandt - 2024 - BMC Medical Ethics 25 (1):1-11.
    Background Despite continuous performance improvements, especially in clinical contexts, a major challenge of Artificial Intelligence based Decision Support Systems (AI-DSS) remains their degree of epistemic opacity. The conditions of and the solutions for the justified use of the occasionally unexplainable technology in healthcare are an active field of research. In March 2024, the European Union agreed upon the Artificial Intelligence Act (AIA), requiring medical AI-DSS to be ad-hoc explainable or to use post-hoc explainability methods. The ethical debate does not (...)
47. Expert judgment in climate science: How it is used and how it can be justified. Mason Majszak & Julie Jebeile - 2023 - Studies in History and Philosophy of Science 100 (C):32-38.
    Like any science marked by high uncertainty, climate science is characterized by a widespread use of expert judgment. In this paper, we first show that, in climate science, expert judgment is used to overcome uncertainty, thus playing a crucial role in the domain and even at times supplanting models. One is left to wonder to what extent it is legitimate to assign expert judgment such a status as an epistemic superiority in the climate context, especially as the production of expert (...)
48. Toward a sociology of machine learning explainability: Human–machine interaction in deep neural network-based automated trading. Bo Hee Min & Christian Borch - 2022 - Big Data and Society 9 (2).
    Machine learning systems are making considerable inroads in society owing to their ability to recognize and predict patterns. However, the decision-making logic of some widely used machine learning models, such as deep neural networks, is characterized by opacity, thereby rendering them exceedingly difficult for humans to understand and explain and, as a result, potentially risky to use. Considering the importance of addressing this opacity, this paper calls for research that studies empirically and theoretically how machine learning experts and (...)
49. Continental philosophy and the Palestinian question: beyond the Jew and the Greek. Zahi Zalloua - 2017 - New York: Bloomsbury Academic, an imprint of Bloomsbury Publishing Plc.
From Sartre to Levinas, continental philosophers have looked to the example of the Jew as the paradigmatic object of and model for ethical inquiry. Levinas, for example, powerfully dedicates his 1974 book Otherwise than Being to the victims of the Holocaust, and turns attention to the state of philosophy after Auschwitz. Such an ethics radically challenges prior notions of autonomy and comprehension, two key ideas for traditional ethical theory and, more generally, the Greek tradition. It seeks to respect the (...) of the other and avoid the dangers of hermeneutic violence. But how does such an ethics of the other translate into real, everyday life? What is at stake in thinking the other as Jew? Is the alterity of the Jew simply a counter to Greek universalism? Is a rhetoric of exceptionalism, with its unavoidable ontological residue, at odds with shifting political realities? Within this paradigm, what then becomes of the Arab or Muslim, the other of the Jew, the other of the other, so to speak? This line of ethical thought, in its desire to bear witness to past suffering and come to terms with subjectivity after Auschwitz, arguably brackets present operations of power from analysis. Would, then, a more sensitive historical approach expose the Palestinian as the other of the Israeli? Here, Zahi Zalloua offers a challenging intervention into how we configure the contemporary.
1 citation
50. Explanation–Question–Response dialogue: An argumentative tool for explainable AI. Federico Castagna, Peter McBurney & Simon Parsons - 2024 - Argument and Computation:1-23.
Advancements and deployments of AI-based systems, especially Deep Learning-driven generative language models, have accomplished impressive results over the past few years. Nevertheless, these remarkable achievements are intertwined with a related fear that such technologies might lead to a general relinquishing of control over our lives to AIs. This concern, which also motivates the increasing interest in the eXplainable Artificial Intelligence (XAI) research field, is mostly caused by the opacity of the output of deep learning systems and the way that it (...)
1 — 50 / 977