Results for 'algorithmic trust'

976 found
  1. Beyond algorithmic trust: interpersonal aspects on consent delegation to LLMs.Zeineb Sassi, Michael Hahn, Sascha Eickmann, Anne Herrmann-Johns & Max Tretter - 2024 - Journal of Medical Ethics 50 (2):139-139.
    In their article ‘Consent-GPT: is it ethical to delegate procedural consent to conversational AI?’, Allen et al explore the ethical complexities involved in handing over parts of the process of obtaining medical consent to conversational Artificial Intelligence (AI) systems, that is, AI-driven large language models (LLMs) trained to interact with patients to inform them about upcoming medical procedures and assist in the process of obtaining informed consent. They focus specifically on challenges related to accuracy (4–5), trust (5), privacy (...)
  2. Public Trust, Institutional Legitimacy, and the Use of Algorithms in Criminal Justice.Duncan Purves & Jeremy Davis - 2022 - Public Affairs Quarterly 36 (2):136-162.
    A common criticism of the use of algorithms in criminal justice is that algorithms and their determinations are in some sense ‘opaque’—that is, difficult or impossible to understand, whether because of their complexity or because of intellectual property protections. Scholars have noted some key problems with opacity, including that opacity can mask unfair treatment and threaten public accountability. In this paper, we explore a different but related concern with algorithmic opacity, which centers on the role of public trust (...)
  3. Algorithmic Decision-Making, Agency Costs, and Institution-Based Trust.Keith Dowding & Brad R. Taylor - 2024 - Philosophy and Technology 37 (2):1-22.
    Algorithm Decision Making (ADM) systems designed to augment or automate human decision-making have the potential to produce better decisions while also freeing up human time and attention for other pursuits. For this potential to be realised, however, algorithmic decisions must be sufficiently aligned with human goals and interests. We take a Principal-Agent (P-A) approach to the questions of ADM alignment and trust. In a broad sense, ADM is beneficial if and only if human principals can trust (...) agents to act faithfully on their behalf. This mirrors the challenge of facilitating P-A relationships among humans, but the peculiar nature of human-machine interaction also raises unique issues. The problem of asymmetric information is omnipresent but takes a different form in the context of ADM. Although the decision-making machinery of an algorithmic agent can in principle be laid bare for all to see, the sheer complexity of ADM systems based on deep learning models prevents straightforward monitoring. We draw on literature from economics and political science to argue that the problem of trust in ADM systems should be addressed at the level of institutions. Although the dyadic relationship between human principals and algorithmic agents is our ultimate concern, cooperation at this level must rest against an institutional environment which allows humans to effectively evaluate and choose among algorithmic alternatives. (shrink)
  4. Establishing Trust in Algorithmic Results: Ground Truth Simulations and the First Empirical Images of a Black Hole.Paula Muhr - 2024 - In Michael Resch, Nico Formanek, Joshy Ammu & Andreas Kaminski, Science and the Art of Simulation: Trust in Science. Springer. pp. 189–204.
    When the first empirical images of a black hole’s shadow were released in April 2019, they transformed this defining black hole feature from a theoretical into an explorable physical entity. But although derived from empirical measurements, the production of these images relied on the deployment of the algorithmic pipelines designed specifically for this purpose to enable the selection of optimal imaging parameters. How could the researchers involved trust their imaging pipelines to deliver faithful reconstructions of unknown images from (...)
  5. Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management.Min Kyung Lee - 2018 - Big Data and Society 5 (1).
    Algorithms increasingly make managerial decisions that people used to make. Perceptions of algorithms, regardless of the algorithms' actual performance, can significantly influence their adoption, yet we do not fully understand how people perceive decisions made by algorithms as compared with decisions made by humans. To explore perceptions of algorithmic management, we conducted an online experiment using four managerial decisions that required either mechanical or human skills. We manipulated the decision-maker, and measured perceived fairness, trust, and emotional response. With (...)
  6. Trust in artificial intelligence: a survey experiment to assess trust in algorithmic decision-making.Ferenc Orbán & Ádám Stefkovics - forthcoming - AI and Society:1-15.
    Artificial intelligence (AI) has seen rapid development over the past decade, leading to its integration into various aspects of human life. The ability to integrate AI systems hinges not solely on their technical efficacy but also on the perceptions held by users or decision-makers. Previous research indicates that many people harbor concerns about AI, which can hinder the adoption of these technologies. This study uses a pre-registered survey experiment embedded in an online survey in Hungary (N = 2100) to assess (...)
  7. Why a Right to an Explanation of Algorithmic Decision-Making Should Exist: A Trust-Based Approach.Tae Wan Kim & Bryan R. Routledge - 2022 - Business Ethics Quarterly 32 (1):75-102.
    Businesses increasingly rely on algorithms that are data-trained sets of decision rules (i.e., the output of the processes often called “machine learning”) and implement decisions with little or no human intermediation. In this article, we provide a philosophical foundation for the claim that algorithmic decision-making gives rise to a “right to explanation.” It is often said that, in the digital era, informed consent is dead. This negative view originates from a rigid understanding that presumes informed consent is a static (...)
  8. Exploring the roles of trust and social group preference on the legitimacy of algorithmic decision-making vs. human decision-making for allocating COVID-19 vaccinations.Marco Lünich & Kimon Kieslich - forthcoming - AI and Society:1-19.
    In combating the ongoing global health threat of the COVID-19 pandemic, decision-makers have to take actions based on a multitude of relevant health data with severe potential consequences for the affected patients. Because of their presumed advantages in handling and analyzing vast amounts of data, computer systems of algorithmic decision-making are implemented and substitute humans in decision-making processes. In this study, we focus on a specific application of ADM in contrast to human decision-making, namely the allocation of COVID-19 vaccines (...)
  9. Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI.Juan Manuel Durán & Karin Rolanda Jongsma - 2021 - Journal of Medical Ethics 47 (5):medethics - 2020-106820.
    The use of black box algorithms in medicine has raised scholarly concerns due to their opaqueness and lack of trustworthiness. Concerns about potential bias, accountability and responsibility, patient autonomy and compromised trust transpire with black box algorithms. These worries connect epistemic concerns with normative issues. In this paper, we outline that black box algorithms are less problematic for epistemic reasons than many scholars seem to believe. By outlining that more transparency in algorithms is not always necessary, and by explaining (...)
  10. Algorithm exploitation: humans are keen to exploit benevolent AI.Jurgis Karpus, Adrian Krüger, Julia Tovar Verba, Bahador Bahrami & Ophelia Deroy - 2021 - iScience 24 (6):102679.
    We cooperate with other people despite the risk of being exploited or hurt. If future artificial intelligence (AI) systems are benevolent and cooperative toward us, what will we do in return? Here we show that our cooperative dispositions are weaker when we interact with AI. In nine experiments, humans interacted with either another human or an AI agent in four classic social dilemma economic games and a newly designed game of Reciprocity that we introduce here. Contrary to the hypothesis that (...)
  11. In Algorithms We Trust: Magical Thinking, Superintelligent AI and Quantum Computing.Nathan Schradle - 2020 - Zygon 55 (3):733-747.
    This article analyzes current attitudes toward artificial intelligence (AI) and quantum computing and argues that they represent a modern‐day form of magical thinking. It proposes that AI and quantum computing are thus excellent examples of the ways that traditional distinctions between religion, science, and magic fail to account for the vibrancy and energy that surround modern technologies.
  12. The philosophical basis of algorithmic recourse.Suresh Venkatasubramanian & Mark Alfano - forthcoming - Fairness, Accountability, and Transparency Conference 2020.
    Philosophers have established that certain ethically important values are modally robust in the sense that they systematically deliver correlative benefits across a range of counterfactual scenarios. In this paper, we contend that recourse – the systematic process of reversing unfavorable decisions by algorithms and bureaucracies across a range of counterfactual scenarios – is such a modally robust good. In particular, we argue that two essential components of a good life – temporally extended agency and trust – are (...)
  13. Healthy Mistrust: Medical Black Box Algorithms, Epistemic Authority, and Preemptionism.Andreas Wolkenstein - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (3):370-379.
    In the ethics of algorithms, a specifically epistemological analysis is rarely undertaken in order to gain a critique (or a defense) of the handling of or trust in medical black box algorithms (BBAs). This article aims to begin to fill this research gap. Specifically, the thesis is examined according to which such algorithms are regarded as epistemic authorities (EAs) and that the results of a medical algorithm must completely replace other convictions that patients have (preemptionism). If this were true, (...)
  14. The Delusional Hedge Algorithm as a Model of Human Learning From Diverse Opinions.Yun-Shiuan Chuang, Xiaojin Zhu & Timothy T. Rogers - 2025 - Topics in Cognitive Science 17 (1):73-87.
    Whereas cognitive models of learning often assume direct experience with both the features of an event and with a true label or outcome, much of everyday learning arises from hearing the opinions of others, without direct access to either the experience or the ground-truth outcome. We consider how people can learn which opinions to trust in such scenarios by extending the hedge algorithm: a classic solution for learning from diverse information sources. We first introduce a semi-supervised variant we call (...)
  15. The ABC of algorithmic aversion: not agent, but benefits and control determine the acceptance of automated decision-making.Gabi Schaap, Tibor Bosse & Paul Hendriks Vettehen - forthcoming - AI and Society:1-14.
    While algorithmic decision-making (ADM) is projected to increase exponentially in the coming decades, the academic debate on whether people are ready to accept, trust, and use ADM as opposed to human decision-making is ongoing. The current research aims at reconciling conflicting findings on ‘algorithmic aversion’ in the literature. It does so by investigating algorithmic aversion while controlling for two important characteristics that are often associated with ADM: increased benefits (monetary and accuracy) and decreased user control. Across (...)
  16. Will big data algorithms dismantle the foundations of liberalism?Daniel First - 2018 - AI and Society 33 (4):545-556.
    In Homo Deus, Yuval Noah Harari argues that technological advances of the twenty-first century will usher in a significant shift in how humans make important life decisions. Instead of turning to the Bible or the Quran, to the heart or to our therapists, parents, and mentors, people will turn to Big Data recommendation algorithms to make these choices for them. Much as we rely on Spotify to recommend music to us, we will soon rely on algorithms to decide our careers, (...)
  17. Perceptions of Justice By Algorithms.Gizem Yalcin, Erlis Themeli, Evert Stamhuis, Stefan Philipsen & Stefano Puntoni - 2023 - Artificial Intelligence and Law 31 (2):269-292.
    Artificial Intelligence and algorithms are increasingly able to replace human workers in cognitively sophisticated tasks, including ones related to justice. Many governments and international organizations are discussing policies related to the application of algorithmic judges in courts. In this paper, we investigate the public perceptions of algorithmic judges. Across two experiments (N = 1,822), and an internal meta-analysis (N = 3,039), our results show that even though court users acknowledge several advantages of algorithms (i.e., cost and speed), they (...)
  18. “I don’t think people are ready to trust these algorithms at face value”: trust and the use of machine learning algorithms in the diagnosis of rare disease.Angeliki Kerasidou, Christoffer Nellåker, Aurelia Sauerbrei, Shirlene Badger & Nina Hallowell - 2022 - BMC Medical Ethics 23 (1):1-14.
    Background: As the use of AI becomes more pervasive, and computerised systems are used in clinical decision-making, the role of trust in, and the trustworthiness of, AI tools will need to be addressed. Using the case of computational phenotyping to support the diagnosis of rare disease in dysmorphology, this paper explores under what conditions we could place trust in medical AI tools, which employ machine learning. Methods: Semi-structured qualitative interviews with stakeholders who design and/or work with computational phenotyping systems. The method (...)
  19. Social context of the issue of discriminatory algorithmic decision-making systems.Daniel Varona & Juan Luis Suarez - 2024 - AI and Society 39 (6):2799-2811.
    Algorithmic decision-making systems have the potential to amplify existing discriminatory patterns and negatively affect perceptions of justice in society. There is a need for a revision of mechanisms to address discrimination in light of the unique challenges presented by these systems, which are not easily auditable or explainable. Research efforts to bring fairness to ADM solutions should be viewed as a matter of justice and trust among actors should be ensured through technology design. Ideas that move us to (...)
  20. From Explanation to Recommendation: Ethical Standards for Algorithmic Recourse.Emily Sullivan & Philippe Verreault-Julien - forthcoming - Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society (AIES’22).
    People are increasingly subject to algorithmic decisions, and it is generally agreed that end-users should be provided an explanation or rationale for these decisions. There are different purposes that explanations can have, such as increasing user trust in the system or allowing users to contest the decision. One specific purpose that is gaining more traction is algorithmic recourse. We first propose that recourse should be viewed as a recommendation problem, not an explanation problem. Then, we argue (...)
  21. Real Attribute Learning Algorithm.Julio Michael Stern, Marcelo de Souza Lauretto, Fabio Nakano & Celma de Oliveira Ribeiro - 1998 - ISAS-SCI’98 2:315-321.
    This paper presents REAL, a Real-Valued Attribute Classification Tree Learning Algorithm. Several of the algorithm's unique features are explained by the users' demands for a decision support tool to be used for evaluating financial operations strategies. Compared to competing algorithms, in our applications, REAL presents major advantages: (1) The REAL classification trees usually have smaller error rates. (2) A single conviction (or trust) measure at each leaf is more convenient than the traditional (probability, confidence-level) pair. (3) No (...)
  22. Qualitative Simulation Algorithm for Resource Scheduling in Enterprise Management Cloud Mode.Jiaohui Yu - 2021 - Complexity 2021:1-12.
    Aiming at the problem of resource scheduling optimization in enterprise management cloud mode, a customizable fuzzy clustering cloud resource scheduling algorithm based on trust sensitivity is proposed. Firstly, on the one hand, a fuzzy clustering method is used to divide cloud resource scheduling into two aspects: cloud user resource scheduling and cloud task resource scheduling. On the other hand, a trust-sensitive mechanism is introduced into cloud task scheduling to prevent malicious node attacks or dishonest recommendation from node providers. (...)
  23. How do people judge the credibility of algorithmic sources?Donghee Shin - 2022 - AI and Society 37 (1):81-96.
    The exponential growth of algorithms has made establishing a trusted relationship between human and artificial intelligence increasingly important. Algorithm systems such as chatbots can play an important role in assessing a user’s credibility on algorithms. Unless users believe the chatbot’s information is credible, they are not likely to be willing to act on the recommendation. This study examines how literacy and user trust influence perceptions of chatbot information credibility. Results confirm that algorithmic literacy and users’ trust play (...)
  24. Effects of Moral Violation on Algorithmic Transparency: An Empirical Investigation.Muhammad Umair Shah, Umair Rehman, Bidhan Parmar & Inara Ismail - 2024 - Journal of Business Ethics 193 (1):19-34.
    Workers can be fired from jobs, citizens sent to jail, and adolescents more likely to experience depression, all because of algorithms. Algorithms have considerable impacts on our lives. To increase user satisfaction and trust, the most common proposal from academics and developers is to increase the transparency of algorithmic design. While there is a large body of literature on algorithmic transparency, the impact of unethical data collection practices is less well understood. Currently, there is limited research on (...)
  25. Design publicity of black box algorithms: a support to the epistemic and ethical justifications of medical AI systems.Andrea Ferrario - 2022 - Journal of Medical Ethics 48 (7):492-494.
    In their article ‘Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI’, Durán and Jongsma discuss the epistemic and ethical challenges raised by black box algorithms in medical practice. The opacity of black box algorithms is an obstacle to the trustworthiness of their outcomes. Moreover, the use of opaque algorithms is not normatively justified in medical practice. The authors introduce a formalism, called computational reliabilism, which allows generating justified beliefs on (...)
  26. Fairness, explainability and in-between: understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system.Doron Kliger, Tsvi Kuflik & Avital Shulner-Tal - 2022 - Ethics and Information Technology 24 (1).
    In light of the widespread use of algorithmic (intelligent) systems across numerous domains, there is an increasing awareness about the need to explain their underlying decision-making process and resulting outcomes. Since oftentimes these systems are being considered as black boxes, adding explanations to their outcomes may contribute to the perception of their transparency and, as a result, increase users’ trust and fairness perception towards the system, regardless of its actual fairness, which can be measured using various fairness tests (...)
  27. From pen to algorithm: optimizing legislation for the future with artificial intelligence.Guzyal Hill, Matthew Waddington & Leon Qiu - forthcoming - AI and Society:1-12.
    This research poses the question of whether it is possible to optimize modern legislative drafting by integrating LLM-based systems into the lawmaking process to address the pervasive challenge of misinformation and disinformation in the age of AI. While misinformation is not a novel phenomenon, with the proliferation of social media and AI, disseminating false or misleading information has become a pressing societal concern, undermining democratic processes, public trust, and social cohesion. AI can be used to proliferate disinformation and misinformation (...)
  28. Do the Ends Justify the Means? Variation in the Distributive and Procedural Fairness of Machine Learning Algorithms.Lily Morse, Mike Horia M. Teodorescu, Yazeed Awwad & Gerald C. Kane - 2021 - Journal of Business Ethics 181 (4):1083-1095.
    Recent advances in machine learning methods have created opportunities to eliminate unfairness from algorithmic decision making. Multiple computational techniques (i.e., algorithmic fairness criteria) have arisen out of this work. Yet, urgent questions remain about the perceived fairness of these criteria and in which situations organizations should use them. In this paper, we seek to gain insight into these questions by exploring fairness perceptions of five algorithmic criteria. We focus on two key dimensions of fairness evaluations: distributive fairness (...)
  29. Nowotny, Helga (2021). In AI we trust: power, illusion and control of predictive algorithms, Polity, Cambridge, UK, ISBN-13: 978-1509548811. [REVIEW]Karamjit S. Gill - 2022 - AI and Society 37 (1):411-414.
  30. The entanglement of trust and knowledge on the web.Judith Simon - 2010 - Ethics and Information Technology 12 (4):343-355.
    In this paper I use philosophical accounts on the relationship between trust and knowledge in science to apprehend this relationship on the Web. I argue that trust and knowledge are fundamentally entangled in our epistemic practices. Yet despite this fundamental entanglement, we do not trust blindly. Instead we make use of knowledge to rationally place or withdraw trust. We use knowledge about the sources of epistemic content as well as general background knowledge to assess epistemic claims. (...)
  31. When Trust is Zero Sum: Automation's Threat to Epistemic Agency.Emmie Malone, Saleh Afroogh, Jason D'Cruz & Kush Varshney - forthcoming - Ethics and Information Technology.
    AI researchers and ethicists have long worried about the threat that automation poses to human dignity, autonomy, and to the sense of personal value that is tied to work. Typically, proposed solutions to this problem focus on ways in which we can reduce the number of job losses which result from automation, ways to retrain those that lose their jobs, or ways to mitigate the social consequences of those job losses. However, even in cases where workers keep their jobs, their (...)
  32. A Metacognitive Approach to Trust and a Case Study: Artificial Agency.Ioan Muntean - 2019 - Computer Ethics - Philosophical Enquiry (CEPE) Proceedings.
    Trust is defined as a belief of a human H (‘the trustor’) about the ability of an agent A (the ‘trustee’) to perform future action(s). We adopt here dispositionalism and internalism about trust: H trusts A iff A has some internal dispositions as competences. The dispositional competences of A are high-level metacognitive requirements, in the line of a naturalized virtue epistemology. (Sosa, Carter) We advance a Bayesian model of two components: (i) confidence in the decision and (ii) model uncertainty. (...)
  33. Trust criteria for artificial intelligence in health: normative and epistemic considerations.Kristin Kostick-Quenet, Benjamin H. Lang, Jared Smith, Meghan Hurley & Jennifer Blumenthal-Barby - 2024 - Journal of Medical Ethics 50 (8):544-551.
    Rapid advancements in artificial intelligence and machine learning (AI/ML) in healthcare raise pressing questions about how much users should trust AI/ML systems, particularly for high stakes clinical decision-making. Ensuring that user trust is properly calibrated to a tool’s computational capacities and limitations has both practical and ethical implications, given that overtrust or undertrust can influence over-reliance or under-reliance on algorithmic tools, with significant implications for patient safety and health outcomes. It is, thus, important to better understand how (...)
  34. Trust and Power in Airbnb’s Digital Rating and Reputation System.Tim Christiaens - 2025 - Ethics and Information Technology (2):1-13.
    Customer ratings and reviews are playing a key role in the contemporary platform economy. To establish trust among strangers without having to directly monitor platform users themselves, companies ask people to evaluate each other. Firms like Uber, Deliveroo, or Airbnb construct digital reputation scores by combining these consumer data with their own information from the algorithmic surveillance of workers. Trustworthy behavior is subsequently rewarded with a good reputation score and higher potential earnings, while untrustworthy behavior can be (...)
  35. A matter of trust: Higher education institutions as information fiduciaries in an age of educational data mining and learning analytics.Kyle M. L. Jones, Alan Rubel & Ellen LeClere - forthcoming - JASIST: Journal of the Association for Information Science and Technology.
    Higher education institutions are mining and analyzing student data to effect educational, political, and managerial outcomes. Done under the banner of “learning analytics,” this work can—and often does—surface sensitive data and information about, inter alia, a student’s demographics, academic performance, offline and online movements, physical fitness, mental wellbeing, and social network. With these data, institutions and third parties are able to describe student life, predict future behaviors, and intervene to address academic or other barriers to student success (however defined). Learning (...)
  36. In AI We Trust Incrementally: a Multi-layer Model of Trust to Analyze Human-Artificial Intelligence Interactions.Andrea Ferrario, Michele Loi & Eleonora Viganò - 2020 - Philosophy and Technology 33 (3):523-539.
    Real engines of the artificial intelligence revolution, machine learning models, and algorithms are embedded nowadays in many services and products around us. As a society, we argue it is now necessary to transition into a phronetic paradigm focused on the ethical dilemmas stemming from the conception and application of AIs to define actionable recommendations as well as normative solutions. However, both academic research and society-driven initiatives are still quite far from clearly defining a solid program of study and intervention. In (...)
  37. Engineering the trust machine. Aligning the concept of trust in the context of blockchain applications.Eva Pöll - 2024 - Ethics and Information Technology 26 (2):1-16.
    Complex technology has become an essential aspect of everyday life. We rely on technology as part of basic infrastructure and repeatedly for tasks throughout the day. Yet, in many cases the relation surpasses mere reliance and evolves to trust in technology. A new, disruptive technology is blockchain. It claims to introduce trustless relationships among its users, aiming to eliminate the need for trust altogether—even being described as “the trust machine”. This paper presents a proposal to adjust the (...)
  38. How Transparency Modulates Trust in Artificial Intelligence.John Zerilli, Umang Bhatt & Adrian Weller - 2022 - Patterns 3 (4):1-10.
    We review the literature on how perceiving an AI making mistakes violates trust and how such violations might be repaired. In doing so, we discuss the role played by various forms of algorithmic transparency in the process of trust repair, including explanations of algorithms, uncertainty estimates, and performance metrics.
  39. Deepfakes and trust in technology.Oliver Laas - 2023 - Synthese 202 (5):1-34.
    Deepfakes are fake recordings generated by machine learning algorithms. Various philosophical explanations have been proposed to account for their epistemic harmfulness. In this paper, I argue that deepfakes are epistemically harmful because they undermine trust in recording technology. As a result, we are no longer entitled to our default doxastic attitude of believing that P on the basis of a recording that supports the truth of P. Distrust engendered by deepfakes changes the epistemic status of recordings to resemble that (...)
    3 citations
  40. In AI we trust? Perceptions about automated decision-making by artificial intelligence.Theo Araujo, Natali Helberger, Sanne Kruikemeier & Claes H. de Vreese - 2020 - AI and Society 35 (3):611-623.
    Fueled by ever-growing amounts of (digital) data and advances in artificial intelligence, decision-making in contemporary societies is increasingly delegated to automated processes. Drawing from social science theories and from the emerging body of research about algorithmic appreciation and algorithmic perceptions, the current study explores the extent to which personal characteristics can be linked to perceptions of automated decision-making by AI, and the boundary conditions of these perceptions, namely the extent to which such perceptions differ across media, (public) health, (...)
    56 citations
  41. A phenomenology and epistemology of large language models: transparency, trust, and trustworthiness.Richard Heersmink, Barend de Rooij, María Jimena Clavel Vázquez & Matteo Colombo - 2024 - Ethics and Information Technology 26 (3):1-15.
    This paper analyses the phenomenology and epistemology of chatbots such as ChatGPT and Bard. The computational architecture underpinning these chatbots are large language models (LLMs), which are generative artificial intelligence (AI) systems trained on a massive dataset of text extracted from the Web. We conceptualise these LLMs as multifunctional computational cognitive artifacts, used for various cognitive tasks such as translating, summarizing, answering questions, information-seeking, and much more. Phenomenologically, LLMs can be experienced as a “quasi-other”; when that happens, users anthropomorphise them. (...)
    3 citations
  42. How Implicit Assumptions on the Nature of Trust Shape the Understanding of the Blockchain Technology.Mattis Jacobs - 2020 - Philosophy and Technology 34 (3):573-587.
    The role that trust plays in blockchain-based systems is understood and portrayed in various manners. The blockchain technology is said to enable and establish trust as well as to redirect it, to substitute for it, and to make it obsolete. Furthermore, there is disagreement on whom or what users have to trust when using the blockchain technology: code, math, algorithms, and machines, or still human actors. This paper hypothesizes that the divergences of the depictions largely rest on (...)
    5 citations
  43. Transparency and the Black Box Problem: Why We Do Not Trust AI.Warren J. von Eschenbach - 2021 - Philosophy and Technology 34 (4):1607-1622.
    With automation of routine decisions coupled with more intricate and complex information architecture operating this automation, concerns are increasing about the trustworthiness of these systems. These concerns are exacerbated by a class of artificial intelligence that uses deep learning, an algorithmic system of deep neural networks, which on the whole remain opaque or hidden from human comprehension. This situation is commonly referred to as the black box problem in AI. Without understanding how AI reaches its conclusions, it is an (...)
    27 citations
  44. Zombies in the Loop? Humans Trust Untrustworthy AI-Advisors for Ethical Decisions.Sebastian Krügel, Andreas Ostermaier & Matthias Uhl - 2022 - Philosophy and Technology 35 (1):1-37.
    Departing from the claim that AI needs to be trustworthy, we find that ethical advice from an AI-powered algorithm is trusted even when its users know nothing about its training data and when they learn information about it that warrants distrust. We conducted online experiments where the subjects took the role of decision-makers who received advice from an algorithm on how to deal with an ethical dilemma. We manipulated the information about the algorithm and studied its influence. Our findings suggest (...)
    5 citations
  45. Artificial agents’ explainability to support trust: considerations on timing and context.Guglielmo Papagni, Jesse de Pagter, Setareh Zafari, Michael Filzmoser & Sabine T. Koeszegi - 2023 - AI and Society 38 (2):947-960.
    Strategies for improving the explainability of artificial agents are a key approach to support the understandability of artificial agents’ decision-making processes and their trustworthiness. However, since explanations are not inclined to standardization, finding solutions that fit the algorithmic-based decision-making processes of artificial agents poses a compelling challenge. This paper addresses the concept of trust in relation to complementary aspects that play a role in interpersonal and human–agent relationships, such as users’ confidence and their perception of artificial agents’ reliability. (...)
    4 citations
  46. Engineering Trustworthiness in the Online Environment.Hugh Desmond - 2023 - In David Collins, Iris Vidmar Jovanović, Mark Alfano & Hale Demir-Doğuoğlu, The Moral Psychology of Trust. Lexington Books. pp. 215-237.
    Algorithm engineering is sometimes portrayed as a new 21st century return of manipulative social engineering. Yet algorithms are necessary tools for individuals to navigate online platforms. Algorithms are like a sensory apparatus through which we perceive online platforms: this is also why individuals can be subtly but pervasively manipulated by biased algorithms. How can we better understand the nature of algorithm engineering and its proper function? In this chapter I argue that algorithm engineering can be best conceptualized as a type (...)
  47. Explainable Artificial Intelligence (XAI) to Enhance Trust Management in Intrusion Detection Systems Using Decision Tree Model.Basim Mahbooba, Mohan Timilsina, Radhya Sahal & Martin Serrano - 2021 - Complexity 2021:1-11.
    Despite the growing popularity of machine learning models in cyber-security applications, most of these models are perceived as a black-box. The eXplainable Artificial Intelligence (XAI) has become increasingly important to interpret the machine learning models to enhance trust management by allowing human experts to understand the underlying data evidence and causal reasoning. According to IDS, the critical role of trust management is to understand the impact of the malicious data to detect any intrusion in the system. The (...)
    2 citations
  48. Consequences of unexplainable machine learning for the notions of a trusted doctor and patient autonomy.Michal Klincewicz & Lily Frank - 2020 - Proceedings of the 2nd EXplainable AI in Law Workshop (XAILA 2019) Co-Located with 32nd International Conference on Legal Knowledge and Information Systems (JURIX 2019).
    This paper provides an analysis of the way in which two foundational principles of medical ethics–the trusted doctor and patient autonomy–can be undermined by the use of machine learning (ML) algorithms and addresses its legal significance. This paper can be a guide to both health care providers and other stakeholders about how to anticipate and in some cases mitigate ethical conflicts caused by the use of ML in healthcare. It can also be read as a road map as to what (...)
  49. ‘Can I trust my patient?’ Machine Learning support for predicting patient behaviour.Florian Funer & Sabine Salloch - 2023 - Journal of Medical Ethics 49 (8):543-544.
    Giorgia Pozzi’s feature article1 on the risks of testimonial injustice when using automated prediction drug monitoring programmes (PDMPs) turns the spotlight on a pressing and well-known clinical problem: physicians’ challenges to predict patient behaviour, so that treatment decisions can be made based on this information, despite any fallibility. Currently, as one possible way to improve prognostic assessments of patient behaviour, Machine Learning-driven clinical decision support systems (ML-CDSS) are being developed and deployed. To make her point, Pozzi discusses ML-CDSSs that are (...)
  50. Toward a Philosophy of Blockchain: A Symposium: Introduction.Melanie Swan & Primavera de Filippi - 2017 - Metaphilosophy 48 (5):603-619.
    This article introduces the symposium “Toward a Philosophy of Blockchain,” which provides a philosophical contemplation of blockchain technology, the digital ledger software underlying cryptocurrencies such as bitcoin, for the secure transfer of money, assets, and information via the Internet without needing a third-party intermediary. The symposium offers philosophical scholarship on a new topic, blockchain technology, from a variety of perspectives. The philosophical themes discussed include mathematical models of reality, signification, and the sociopolitical institutions that structure human life and interaction. The (...)
    8 citations
1 — 50 / 976