Results for 'opaque systems'

976 found
  1. Deference to Opaque Systems and Morally Exemplary Decisions. James Fritz - forthcoming - AI and Society:1-13.
    Many have recently argued that there are weighty reasons against making high-stakes decisions solely on the basis of recommendations from artificially intelligent (AI) systems. Even if deference to a given AI system were known to reliably result in the right action being taken, the argument goes, that deference would lack morally important characteristics: the resulting decisions would not, for instance, be based on an appreciation of right-making reasons. Nor would they be performed from moral virtue; nor would they have (...)
  2. Of opaque oracles: epistemic dependence on AI in science poses no novel problems for social epistemology. Jakob Ortmann - 2025 - Synthese 205 (2):1-22.
    Deep Neural Networks (DNNs) are epistemically opaque in the sense that their inner functioning is often unintelligible to human investigators. Inkeri Koskinen has recently argued that this poses special problems for a widespread view in social epistemology according to which thick normative trust between researchers is necessary to handle opacity: if DNNs are essentially opaque, there simply exists nobody who could be trusted to understand all the aspects a DNN picks up during training. In this paper, I present (...)
  3. Opaque Updates. Michael Cohen - 2020 - Journal of Philosophical Logic 50 (3):447-470.
    If updating with E has the same result across all epistemically possible worlds, then the agent has no uncertainty as to the behavior of the update, and we may call it a transparent update. If an agent is uncertain about the behavior of an update, we may call it opaque. In order to model the uncertainty an agent has about the result of an update, the same update must behave differently across different possible worlds. In this paper, I study (...)
    5 citations
  4. Energy dependence of opaqueness for pp collisions at high energies. T. T. Chou - 1978 - Foundations of Physics 8 (5):319-328.
    Opaqueness of pp collisions is evaluated at three CERN-ISR energies. Comparisons with predictions of the factorizable eikonal models and the scaling hypothesis are made. It appears that results are in favor of the factorizable models.
  5. AI and the expert; a blueprint for the ethical use of opaque AI. Amber Ross - 2022 - AI and Society (2022):Online.
    The increasing demand for transparency in AI has recently come under scrutiny. The question is often posed in terms of “epistemic double standards”, and whether the standards for transparency in AI ought to be higher than, or equivalent to, our standards for ordinary human reasoners. I agree that the push for increased transparency in AI deserves closer examination, and that comparing these standards to our standards of transparency for other opaque systems is an appropriate starting point. I suggest (...)
    4 citations
  6. Neither opaque nor transparent: A transdisciplinary methodology to investigate datafication at the EU borders. Ana Valdivia, Claudia Aradau, Tobias Blanke & Sarah Perret - 2022 - Big Data and Society 9 (2).
    In 2020, the European Union announced the award of the contract for the biometric part of the new database for border control, the Entry Exit System, to two companies: IDEMIA and Sopra Steria. Both companies had been previously involved in the development of databases for border and migration management. While there has been a growing amount of publicly available documents that show what kind of technologies are being implemented, for how much money, and by whom, there has been limited engagement (...)
    1 citation
  7. Explainable AI lacks regulative reasons: why AI and human decision‑making are not equally opaque. Uwe Peters - forthcoming - AI and Ethics.
    Many artificial intelligence (AI) systems currently used for decision-making are opaque, i.e., the internal factors that determine their decisions are not fully known to people due to the systems’ computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque and since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here, I (...)
    4 citations
  8. Coming to Terms with the Black Box Problem: How to Justify AI Systems in Health Care. Ryan Marshall Felder - 2021 - Hastings Center Report 51 (4):38-45.
    The use of opaque, uninterpretable artificial intelligence systems in health care can be medically beneficial, but it is often viewed as potentially morally problematic on account of this opacity—because the systems are black boxes. Alex John London has recently argued that opacity is not generally problematic, given that many standard therapies are explanatorily opaque and that we can rely on statistical validation of the systems in deciding whether to implement them. But is statistical validation sufficient (...)
    6 citations
  9. The paradoxical transparency of opaque machine learning. Felix Tun Han Lo - forthcoming - AI and Society:1-13.
    This paper examines the paradoxical transparency involved in training machine-learning models. Existing literature typically critiques the opacity of machine-learning models such as neural networks or collaborative filtering, a type of critique that parallels the black-box critique in technology studies. Accordingly, people in power may leverage the models’ opacity to justify a biased result without subjecting the technical operations to public scrutiny, in what Dan McQuillan metaphorically depicts as an “algorithmic state of exception”. This paper attempts to differentiate the black-box abstraction (...)
    1 citation
  10. A riddle, wrapped in a mystery, inside an enigma: How semantic black boxes and opaque artificial intelligence confuse medical decision‐making. Robin Pierce, Sigrid Sterckx & Wim Van Biesen - 2021 - Bioethics 36 (2):113-120.
    The use of artificial intelligence (AI) in healthcare comes with opportunities but also numerous challenges. A specific challenge that remains underexplored is the lack of clear and distinct definitions of the concepts used in and/or produced by these algorithms, and how their real world meaning is translated into machine language and vice versa, how their output is understood by the end user. This “semantic” black box adds to the “mathematical” black box present in many AI systems in which the (...)
    2 citations
  11. Transparency in Complex Computational Systems. Kathleen A. Creel - 2020 - Philosophy of Science 87 (4):568-589.
    Scientists depend on complex computational systems that are often ineliminably opaque, to the detriment of our ability to give scientific explanations and detect artifacts. Some philosophers have s...
    67 citations
  12. Moving Toward a Quasifederal System: Intergovernmental Relations in Italy. Giorgio Brosio - 2003 - Journal des Economistes Et des Etudes Humaines 13 (4).
    The paper illustrates the evolution of the territorial system of government in Italy. A traditionally centralized state is turning into a quasifederation. The reasons underpinning this transformation are the growing dissatisfaction with the inefficiency of the central government, the quest for autonomy by the fastest growing regions and the opposition by the latter to the interterritorial redistribution of resources made by the central government with the use of nontransparent and inefficient instruments. A substantial degree of subnational tax autonomy has been reintroduced. (...)
  13. Disclosive Ethics and Information Technology: Disclosing Facial Recognition Systems. Lucas D. Introna - 2005 - Ethics and Information Technology 7 (2):75-86.
    This paper is an attempt to present disclosive ethics as a framework for computer and information ethics – in line with the suggestions by Brey, but also in quite a different manner. The potential of such an approach is demonstrated through a disclosive analysis of facial recognition systems. The paper argues that the politics of information technology is a particularly powerful politics since information technology is an opaque technology – i.e. relatively closed to scrutiny. It presents the design (...)
    11 citations
  14. Rights, systems of rights, and Unger's system of rights: Part. [REVIEW] Horst Eidenmüller - 1991 - Law and Philosophy 10 (1):1-28.
    Critical legal scholarship has so far been concerned primarily with trashing or deconstructing the belief clusters of "liberalism". Negative posturing of this kind is not the only feature of the movement, though. Roberto Unger has dreamt up a sociopolitical vision that presents an "empowered democracy". An important element of his "empowered democracy" is a new system of rights. Part 1 of my essay contains an analysis of the notion of a subjective right. I argue that both Hohfeld's fundamental legal conceptions (...)
    2 citations
  15. Trustworthy medical AI systems need to know when they don’t know. Thomas Grote - forthcoming - Journal of Medical Ethics.
    There is much to learn from Durán and Jongsma’s paper.1 One particularly important insight concerns the relationship between epistemology and ethics in medical artificial intelligence. In clinical environments, the task of AI systems is to provide risk estimates or diagnostic decisions, which then need to be weighed by physicians. Hence, while the implementation of AI systems might give rise to ethical issues—for example, overtreatment, defensive medicine or paternalism2—the issue that lies at the heart is an epistemic problem: how (...)
    6 citations
  16. The “big red button” is too late: an alternative model for the ethical evaluation of AI systems. Thomas Arnold & Matthias Scheutz - 2018 - Ethics and Information Technology 20 (1):59-69.
    As a way to address both ominous and ordinary threats of artificial intelligence, researchers have started proposing ways to stop an AI system before it has a chance to escape outside control and cause harm. A so-called “big red button” would enable human operators to interrupt or divert a system while preventing the system from learning that such an intervention is a threat. Though an emergency button for AI seems to make intuitive sense, that approach ultimately concentrates on the point (...)
    11 citations
  17. A Rule-Based Solution to Opaque Medical Billing in the U.S. Christopher A. Bobier - 2024 - Journal of Law, Medicine and Ethics 52 (1):22-30.
    Patients and physicians do not know the cost of medical procedures. Opaque medical billing thus contributes to exorbitant, rising medical costs, burdening the healthcare system and individuals. After criticizing two proposed solutions to the problem of opaque medical billing, I argue that the Centers for Medicare and Medicaid Services should pursue a rule requiring that patients be informed by the physician of a reasonable out-of-pocket expense estimate for non-urgent procedures prior to services rendered.
  18. Ethical approaches in designing autonomous and intelligent systems: a comprehensive survey towards responsible development. Anetta Jedličková - forthcoming - AI and Society:1-14.
    Over the past decade, significant progress in artificial intelligence (AI) has spurred the adoption of its algorithms, addressing previously daunting challenges. Alongside these remarkable strides, there has been a simultaneous increase in model complexity and reliance on opaque AI models, lacking transparency. In numerous scenarios, the systems themselves may necessitate making decisions entailing ethical dimensions. Consequently, it has become imperative to devise solutions to integrate ethical considerations into AI system development practices, facilitating broader utilization of AI systems (...)
  19. Social System, Rationality and Revolution. Leszek Nowak & Marcin Paprzycki (eds.) - 1993 - Rodopi.
    Contents: Leszek NOWAK, Marcin PAPRZYCKI: Introduction. ON THE NATURE OF SOCIAL SYSTEM. Ulrich K. PREUSS: Political Order and Democracy. Carl Schmitt and his Influence. Katarzyna PAPRZYCKA: A Paradox in Hobbes' Philosophy of Law. Stephen L. ESQUITH: Democratic Political Dialogue. Edward JELINSKI: Democracy in Polish Reformist Socialist Thought. Katarzyna PAPRZYCKA: The Master and Slave Configuration in Hegel's System. Maurice GODELIER: Lévi-Strauss, Marx and After. A reappraisal of structuralist and Marxist tools for analyzing social logics. Krzysztof NIEDZWIADEK: On the Structure of Social (...)
  20. Smart Sankey picturization for energy management systems in India. Anant Chandra & Satyajit Ghosh - 2020 - AI and Society 35 (2):401-407.
    India’s energy demand is predicted to rise by 135% within a span of 20 years. Coping with surging energy demands requires several reforms in both renewable and non-renewable sectors. Factors such as rising population, reduction in the cost of renewable energy technology and their effect on the nation’s GDP, can make policy making a herculean task and the justification for such policies quite opaque to the public. Artificial Intelligence technology can help decision makers to quickly draw conclusions from (...)
  21. Sources of Opacity in Computer Systems: Towards a Comprehensive Taxonomy. Sara Mann, Barnaby Crook, Lena Kästner, Astrid Schomäcker & Timo Speith - 2023 - 2023 IEEE 31st International Requirements Engineering Conference Workshops (REW):337-342.
    Modern computer systems are ubiquitous in contemporary life yet many of them remain opaque. This poses significant challenges in domains where desiderata such as fairness or accountability are crucial. We suggest that the best strategy for achieving system transparency varies depending on the specific source of opacity prevalent in a given context. Synthesizing and extending existing discussions, we propose a taxonomy consisting of eight sources of opacity that fall into three main categories: architectural, analytical, and socio-technical. For each (...)
    3 citations
  22. What Kind of Artificial Intelligence Should We Want for Use in Healthcare Decision-Making Applications? Jordan Wadden - 2021 - Canadian Journal of Bioethics / Revue canadienne de bioéthique 4 (1):94-100.
    The prospect of including artificial intelligence (AI) in clinical decision-making is an exciting next step for some areas of healthcare. This article provides an analysis of the available kinds of AI systems, focusing on macro-level characteristics. This includes examining the strengths and weaknesses of opaque systems and fully explainable systems. Ultimately, the article argues that “grey box” systems, which include some combination of opacity and transparency, ought to be used in healthcare settings.
    4 citations
  23. Design publicity of black box algorithms: a support to the epistemic and ethical justifications of medical AI systems. Andrea Ferrario - 2022 - Journal of Medical Ethics 48 (7):492-494.
    In their article ‘Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI’, Durán and Jongsma discuss the epistemic and ethical challenges raised by black box algorithms in medical practice. The opacity of black box algorithms is an obstacle to the trustworthiness of their outcomes. Moreover, the use of opaque algorithms is not normatively justified in medical practice. The authors introduce a formalism, called computational reliabilism, which allows generating justified beliefs on (...)
    2 citations
  24. Black-Box Testing and Auditing of Bias in ADM Systems. Tobias D. Krafft, Marc P. Hauer & Katharina Zweig - 2024 - Minds and Machines 34 (2):1-31.
    For years, the number of opaque algorithmic decision-making systems (ADM systems) with a large impact on society has been increasing: e.g., systems that compute decisions about future recidivism of criminals, credit worthiness, or the many small decision computing systems within social networks that create rankings, provide recommendations, or filter content. Concerns that such a system makes biased decisions can be difficult to investigate: be it by people affected, NGOs, stakeholders, governmental testing and auditing authorities, or (...)
  25. Deep Learning Opacity, and the Ethical Accountability of AI Systems. A New Perspective. Gianfranco Basti & Giuseppe Vitiello - 2023 - In Raffaela Giovagnoli & Robert Lowe (eds.), The Logic of Social Practices II. Springer Nature Switzerland. pp. 21-73.
    In this paper we analyse the conditions for attributing to AI autonomous systems the ontological status of “artificial moral agents”, in the context of the “distributed responsibility” between humans and machines in Machine Ethics (ME). In order to address the fundamental issue in ME of the unavoidable “opacity” of their decisions with ethical/legal relevance, we start from the neuroethical evidence in cognitive science. In humans, the “transparency” and then the “ethical accountability” of their actions as responsible moral agents is (...)
  26. Modelling Accuracy and Trustworthiness of Explaining Agents. Alberto Termine, Giuseppe Primiero & Fabio Aurelio D’Asaro - 2021 - In Sujata Ghosh & Thomas Icard (eds.), Logic, Rationality, and Interaction: 8th International Workshop, LORI 2021, Xi’an, China, October 16–18, 2021, Proceedings. Springer Verlag. pp. 232-245.
    Current research in Explainable AI includes post-hoc explanation methods that focus on building transparent explaining agents able to emulate opaque ones. Such agents are naturally required to be accurate and trustworthy. However, what it means for an explaining agent to be accurate and trustworthy is far from being clear. We characterize accuracy and trustworthiness as measures of the distance between the formal properties of a given opaque system and those of its transparent explanantes. To this aim, we extend (...)
  27. What we owe to decision-subjects: beyond transparency and explanation in automated decision-making. David Gray Grant, Jeff Behrends & John Basl - 2023 - Philosophical Studies 2003:1-31.
    The ongoing explosion of interest in artificial intelligence is fueled in part by recently developed techniques in machine learning. Those techniques allow automated systems to process huge amounts of data, utilizing mathematical methods that depart from traditional statistical approaches, and resulting in impressive advancements in our ability to make predictions and uncover correlations across a host of interesting domains. But as is now widely discussed, the way that those systems arrive at their outputs is often opaque, even (...)
    4 citations
  28. On the Scope of the Right to Explanation. James Fritz - forthcoming - AI and Ethics.
    As opaque algorithmic systems take up a larger and larger role in shaping our lives, calls for explainability in various algorithmic systems have increased. Many moral and political philosophers have sought to vindicate these calls for explainability by developing theories on which decision-subjects—that is, individuals affected by decisions—have a moral right to the explanation of the systems that affect them. Existing theories tend to suggest that the right to explanation arises solely in virtue of facts about (...)
  29. AI or Your Lying Eyes: Some Shortcomings of Artificially Intelligent Deepfake Detectors. Keith Raymond Harris - 2024 - Philosophy and Technology 37 (7):1-19.
    Deepfakes pose a multi-faceted threat to the acquisition of knowledge. It is widely hoped that technological solutions—in the form of artificially intelligent systems for detecting deepfakes—will help to address this threat. I argue that the prospects for purely technological solutions to the problem of deepfakes are dim. Especially given the evolving nature of the threat, technological solutions cannot be expected to prevent deception at the hands of deepfakes, or to preserve the authority of video footage. Moreover, the success of (...)
    1 citation
  30. On Referential Opacity in Spinoza's Ethics. Kian Mintz-Woo - 2009 - Praxis 2 (2).
    In Spinoza’s system, the identity of mental modes and extended modes is suggested, but a formal argument for its truth is difficult to extract. One prima facie difficulty for the claim that mental and extended modes are identical is that substitution of co-referential terms in contexts which are specific to thought or extension fails to preserve truth value. Della Rocca has answered this challenge by claiming that Spinoza relies upon referentially opaque contexts. In this essay, I defend this solution (...)
  31. Karl Jaspers and artificial neural nets: on the relation of explaining and understanding artificial intelligence in medicine. Christopher Poppe & Georg Starke - 2022 - Ethics and Information Technology 24 (3):1-10.
    Assistive systems based on Artificial Intelligence (AI) are bound to reshape decision-making in all areas of society. One of the most intricate challenges arising from their implementation in high-stakes environments such as medicine concerns their frequently unsatisfying levels of explainability, especially in the guise of the so-called black-box problem: highly successful models based on deep learning seem to be inherently opaque, resisting comprehensive explanations. This may explain why some scholars claim that research should focus on rendering AI (...) understandable, rather than explainable. Yet, there is a grave lack of agreement concerning these terms in much of the literature on AI. We argue that the seminal distinction made by the philosopher and physician Karl Jaspers between different types of explaining and understanding in psychopathology can be used to promote greater conceptual clarity in the context of Machine Learning (ML). Following Jaspers, we claim that explaining and understanding constitute multi-faceted epistemic approaches that should not be seen as mutually exclusive, but rather as complementary ones as in and of themselves they are necessarily limited. Drawing on the famous example of Watson for Oncology we highlight how Jaspers’ methodology translates to the case of medical AI. Classical considerations from the philosophy of psychiatry can therefore inform a debate at the centre of current AI ethics, which in turn may be crucial for a successful implementation of ethically and legally sound AI in medicine.
    1 citation
  32. What we owe to decision-subjects: beyond transparency and explanation in automated decision-making. David Gray Grant, Jeff Behrends & John Basl - 2025 - Philosophical Studies 182 (1):55-85.
    The ongoing explosion of interest in artificial intelligence is fueled in part by recently developed techniques in machine learning. Those techniques allow automated systems to process huge amounts of data, utilizing mathematical methods that depart from traditional statistical approaches, and resulting in impressive advancements in our ability to make predictions and uncover correlations across a host of interesting domains. But as is now widely discussed, the way that those systems arrive at their outputs is often opaque, even (...)
  33. Clinical Ethics Consultation in the United Kingdom. Sheila A. M. McLean - 2009 - Diametros 22:76-89.
    The system of clinical ethics committees (CECs) in the United Kingdom is based on goodwill. No formal requirements exist as to constitution, membership, range of expertise or the status of their recommendations. Healthcare professionals are not obliged to use CECs where they exist, nor to follow any advice received. In addition, the make-up of CECs suggests that ethics itself may be under-represented. In most cases, there is one member with a training in ethics – the rest are healthcare professionals or (...)
     
    4 citations
  34. Explainable Artificial Intelligence (XAI) 2.0: A Manifesto of Open Challenges and Interdisciplinary Research Directions. Luca Longo, Mario Brcic, Federico Cabitza, Jaesik Choi, Roberto Confalonieri, Javier Del Ser, Riccardo Guidotti, Yoichi Hayashi, Francisco Herrera, Andreas Holzinger, Richard Jiang, Hassan Khosravi, Freddy Lecue, Gianclaudio Malgieri, Andrés Páez, Wojciech Samek, Johannes Schneider, Timo Speith & Simone Stumpf - 2024 - Information Fusion 106 (June 2024).
    As systems based on opaque Artificial Intelligence (AI) continue to flourish in diverse real-world applications, understanding these black box models has become paramount. In response, Explainable AI (XAI) has emerged as a field of research with practical and ethical benefits across various domains. This paper not only highlights the advancements in XAI and its application in real-world scenarios but also addresses the ongoing challenges within XAI, emphasizing the need for broader perspectives and collaborative efforts. We bring together experts (...)
    2 citations
  35. Exploring member trust in German community-supported agriculture: a multiple regression analysis. Felix Zoll, Caitlin K. Kirby, Kathrin Specht & Rosemarie Siebert - 2022 - Agriculture and Human Values 40 (2):709-724.
    Opaque value chains as well as environmental, ethical and health issues and food scandals are decreasing consumer trust in conventional agriculture and the dominant food system. As a result, critical consumers are increasingly turning to community-supported agriculture (CSA) to reconnect with producers and food. CSA is often perceived as a more sustainable, localized mode of food production, providing transparent production or social interaction between consumers and producers. This enables consumers to observe where their food is coming from, which means (...)
  36. Understanding via exemplification in XAI: how explaining image classification benefits from exemplars. Sara Mann - forthcoming - AI and Society:1-16.
    Artificial intelligence (AI) systems that perform image classification tasks are being used to great success in many application contexts. However, many of these systems are opaque, even to experts. This lack of understanding can be problematic for ethical, legal, or practical reasons. The research field Explainable AI (XAI) has therefore developed several approaches to explain image classifiers. The hope is to bring about understanding, e.g., regarding why certain images are classified as belonging to a particular target class. (...)
    1 citation
  37. Bilinguals’ Sensitivity to Grammatical Gender Cues in Russian: The Role of Cumulative Input, Proficiency, and Dominance. Natalia Mitrofanova, Yulia Rodina, Olga Urek & Marit Westergaard - 2018 - Frontiers in Psychology 9.
    This paper reports on an experimental study investigating the acquisition of grammatical gender in Russian by heritage speakers living in Norway. The participants are 54 Norwegian-Russian bilingual children (4;0-10;2) as well as 107 Russian monolingual controls (3;0-7;0). Previous research has shown that grammatical gender is problematic for bilingual speakers, especially in cases where gender assignment is opaque (Schwartz et al., 2015; Polinsky, 2008; Rodina and Westergaard, 2017). Furthermore, factors such as proficiency and family type (one or two Russian-speaking parents) (...)
  38. Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence. Carlos Zednik - 2019 - Philosophy and Technology 34 (2):265-288.
    Many of the computing systems programmed using Machine Learning are opaque: it is difficult to know why they do what they do or how they work. Explainable Artificial Intelligence aims to develop analytic techniques that render opaque computing systems transparent, but lacks a normative framework with which to evaluate these techniques’ explanatory successes. The aim of the present discussion is to develop such a framework, paying particular attention to different stakeholders’ distinct explanatory requirements. Building on an (...)
    66 citations
  39. Understanding, Idealization, and Explainable AI. Will Fleisher - 2022 - Episteme 19 (4):534-560.
    Many AI systems that make important decisions are black boxes: how they function is opaque even to their developers. This is due to their high complexity and to the fact that they are trained rather than programmed. Efforts to alleviate the opacity of black box systems are typically discussed in terms of transparency, interpretability, and explainability. However, there is little agreement about what these key concepts mean, which makes it difficult to adjudicate the success or promise of (...)
    18 citations
  40. Algorithmic Decision-Making Based on Machine Learning from Big Data: Can Transparency Restore Accountability? Paul B. de Laat - 2018 - Philosophy and Technology 31 (4):525-541.
    Decision-making assisted by algorithms developed by machine learning is increasingly determining our lives. Unfortunately, full opacity about the process is the norm. Would transparency contribute to restoring accountability for such systems as is often maintained? Several objections to full transparency are examined: the loss of privacy when datasets become public, the perverse effects of disclosure of the very algorithms themselves, the potential loss of companies’ competitive edge, and the limited gains in answerability to be expected since sophisticated algorithms usually (...)
    31 citations
  41.  23
    The Roots of Hermeneutics in Kant's Reflective-Teleological Judgment.Horst Ruthrof - 2022 - Springer Verlag.
    This book challenges the standard view that modern hermeneutics begins with Friedrich Ast and Friedrich Schleiermacher, arguing instead that it is the dialectic of reflective and teleological reason in Kant’s Critique of Judgment that provides the actual proto-hermeneutic foundation. It is revolutionary in doing so, replacing interpretive truth claims with the more appropriate claim of rendering opaque contexts intelligible. Taking Gadamer’s comprehensive analysis of hermeneutics in Truth and Method (1960) as its point of departure, the book turns to (...)
    3 citations
  42.  33
    Explainable AI and stakes in medicine: A user study.Sam Baron, Andrew James Latham & Somogy Varga - 2025 - Artificial Intelligence 340 (C):104282.
    The apparent downsides of opaque algorithms have led to a demand for explainable AI (XAI) methods by which a user might come to understand why an algorithm produced the particular output it did, given its inputs. Patients, for example, might find that the lack of explanation of the process underlying the algorithmic recommendations for diagnosis and treatment hinders their ability to provide informed consent. This paper examines the impact of two factors on user perceptions of explanations for AI (...) in medical contexts. The factors considered were the stakes of the decision—high versus low—and the decision source—human versus AI. 484 participants were presented with vignettes in which medical diagnosis and treatment plan recommendations were made by humans or by AI. Separate vignettes were used for high stakes scenarios involving life-threatening diseases, and low stakes scenarios involving mild diseases. In each vignette, an explanation for the decision was given. Four explanation types were tested across separate vignettes: no explanation, counterfactual, causal and a novel ‘narrative-based’ explanation, not previously considered. This yielded a total of 16 conditions, of which each participant saw only one. Individuals were asked to evaluate the explanations they received based on helpfulness, understanding, consent, reliability, trust, interests and likelihood of undergoing treatment. We observed a main effect for stakes on all factors and a main effect for decision source on all factors except for helpfulness and likelihood to undergo treatment. While we observed effects for explanation on helpfulness, understanding, consent, reliability, trust and interests, we by and large did not see any differences between the effects of explanation types. This suggests that the effectiveness of explanations may not depend on the type of explanation but instead on the stakes and decision source. (shrink)
  43. Ameliorating Algorithmic Bias, or Why Explainable AI Needs Feminist Philosophy.Linus Ta-Lun Huang, Hsiang-Yun Chen, Ying-Tung Lin, Tsung-Ren Huang & Tzu-Wei Hung - 2022 - Feminist Philosophy Quarterly 8 (3).
    Artificial intelligence (AI) systems are increasingly adopted to make decisions in domains such as business, education, health care, and criminal justice. However, such algorithmic decision systems can have prevalent biases against marginalized social groups and undermine social justice. Explainable artificial intelligence (XAI) is a recent development aiming to make an AI system’s decision processes less opaque and to expose its problematic biases. This paper argues against technical XAI, according to which the detection and interpretation of algorithmic bias (...)
    3 citations
  44.  22
    Beyond model interpretability: socio-structural explanations in machine learning.Andrew Smart & Atoosa Kasirzadeh - forthcoming - AI and Society:1-9.
    What is it to interpret the outputs of an opaque machine learning model? One approach is to develop interpretable machine learning techniques. These techniques aim to show how machine learning models function by providing either model-centric local or global explanations, which can be based on mechanistic interpretations (revealing the inner working mechanisms of models) or non-mechanistic approximations (showing input feature–output data relationships). In this paper, we draw on social philosophy to argue that interpreting machine learning outputs in certain normatively (...)
    1 citation
  45.  52
    On the Justified Use of AI Decision Support in Evidence-Based Medicine: Validity, Explainability, and Responsibility.Sune Holm - forthcoming - Cambridge Quarterly of Healthcare Ethics:1-7.
    When is it justified to use opaque artificial intelligence (AI) output in medical decision-making? Consideration of this question is of central importance for the responsible use of opaque machine learning (ML) models, which have been shown to produce accurate and reliable diagnoses, prognoses, and treatment suggestions in medicine. In this article, I discuss the merits of two answers to the question. According to the Explanation View, clinicians must have access to an explanation of why an output was produced. (...)
    6 citations
  46.  10
    LLMs beyond the lab: the ethics and epistemics of real-world AI research.Joost Mollen - 2025 - Ethics and Information Technology 27 (1):1-11.
    Research under real-world conditions is crucial to the development and deployment of robust AI systems. Exposing large language models to complex use settings yields knowledge about their performance and impact, which cannot be obtained under controlled laboratory conditions or through anticipatory methods. This epistemic need for real-world research is exacerbated by large language models’ opaque internal operations and potential for emergent behavior. However, despite its epistemic value and widespread application, the ethics of real-world AI research has received little scholarly (...)
  47. Artificial Intelligence and Black‐Box Medical Decisions: Accuracy versus Explainability.Alex John London - 2019 - Hastings Center Report 49 (1):15-21.
    Although decision‐making algorithms are not new to medicine, the availability of vast stores of medical data, gains in computing power, and breakthroughs in machine learning are accelerating the pace of their development, expanding the range of questions they can address, and increasing their predictive power. In many cases, however, the most powerful machine learning techniques purchase diagnostic or predictive accuracy at the expense of our ability to access “the knowledge within the machine.” Without an explanation in terms of reasons or (...)
    90 citations
  48.  71
    Algorithmic Decision-Making Based on Machine Learning from Big Data: Can Transparency Restore Accountability?Massimo Durante & Marcello D'Agostino - 2018 - Philosophy and Technology 31 (4):525-541.
    Decision-making assisted by algorithms developed by machine learning is increasingly determining our lives. Unfortunately, full opacity about the process is the norm. Would transparency contribute to restoring accountability for such systems as is often maintained? Several objections to full transparency are examined: the loss of privacy when datasets become public, the perverse effects of disclosure of the very algorithms themselves, the potential loss of companies’ competitive edge, and the limited gains in answerability to be expected since sophisticated algorithms usually (...)
    29 citations
  49.  49
    Accuracy and Interpretability: Struggling with the Epistemic Foundations of Machine Learning-Generated Medical Information and Their Practical Implications for the Doctor-Patient Relationship.Florian Funer - 2022 - Philosophy and Technology 35 (1):1-20.
    The initial successes in recent years in harnessing machine learning technologies to improve medical practice and benefit patients have attracted attention in a wide range of healthcare fields. In particular, this is to be achieved by providing automated decision recommendations to the treating clinician. Some hopes placed in such ML-based systems for healthcare, however, seem to be unwarranted, at least partially because of their inherent lack of transparency, although their results seem convincing in accuracy and reliability. Skepticism arises when the physician (...)
    3 citations
  50.  44
    Algorithmic Decision-Making Based on Machine Learning from Big Data: Can Transparency Restore Accountability?Paul Laat - 2018 - Philosophy and Technology 31 (4):525-541.
    Decision-making assisted by algorithms developed by machine learning is increasingly determining our lives. Unfortunately, full opacity about the process is the norm. Would transparency contribute to restoring accountability for such systems as is often maintained? Several objections to full transparency are examined: the loss of privacy when datasets become public, the perverse effects of disclosure of the very algorithms themselves (“gaming the system” in particular), the potential loss of companies’ competitive edge, and the limited gains in answerability to be (...)
    29 citations
1 — 50 / 976