Results for 'AI systems'

970 found
  1.
    How AI Systems Can Be Blameworthy.Hannah Altehenger, Leonhard Menges & Peter Schulte - 2024 - Philosophia 52 (4):1083-1106.
    AI systems, like self-driving cars, healthcare robots, or Autonomous Weapon Systems, already play an increasingly important role in our lives and will do so to an even greater extent in the near future. This raises a fundamental philosophical question: who is morally responsible when such systems cause unjustified harm? In the paper, we argue for the admittedly surprising claim that some of these systems can themselves be morally responsible for their conduct in an important and everyday (...)
  2.
    Are AI systems biased against the poor? A machine learning analysis using Word2Vec and GloVe embeddings.Georgina Curto, Mario Fernando Jojoa Acosta, Flavio Comim & Begoña Garcia-Zapirain - forthcoming - AI and Society:1-16.
    Among the myriad of technical approaches and abstract guidelines proposed to the topic of AI bias, there has been an urgent call to translate the principle of fairness into the operational AI reality with the involvement of social sciences specialists to analyse the context of specific types of bias, since there is not a generalizable solution. This article offers an interdisciplinary contribution to the topic of AI and societal bias, in particular against the poor, providing a conceptual framework of the (...)
    2 citations
  3.
    AI Systems Under Criminal Law: a Legal Analysis and a Regulatory Perspective.Francesca Lagioia & Giovanni Sartor - 2020 - Philosophy and Technology 33 (3):433-465.
    Criminal liability for acts committed by AI systems has recently become a hot legal topic. This paper includes three different contributions. The first contribution is an analysis of the extent to which an AI system can satisfy the requirements for criminal liability: accomplishing an actus reus, having the corresponding mens rea, possessing the cognitive capacities needed for responsibility. The second contribution is a discussion of criminal activity accomplished by an AI entity, with reference to a recent case involving an (...)
    5 citations
  4. AI Systems and Respect for Human Autonomy.Arto Laitinen & Otto Sahlgren - 2021 - Frontiers in Artificial Intelligence.
    This study concerns the sociotechnical bases of human autonomy. Drawing on recent literature on AI ethics, philosophical literature on dimensions of autonomy, and on independent philosophical scrutiny, we first propose a multi-dimensional model of human autonomy and then discuss how AI systems can support or hinder human autonomy. What emerges is a philosophically motivated picture of autonomy and of the normative requirements personal autonomy poses in the context of algorithmic systems. Ranging from consent to data collection and processing, (...)
    7 citations
  5. AI systems must not confuse users about their sentience or moral status.Eric Schwitzgebel - 2023 - Patterns 4.
    One relatively neglected challenge in ethical artificial intelligence (AI) design is ensuring that AI systems invite a degree of emotional and moral concern appropriate to their moral standing. Although experts generally agree that current AI chatbots are not sentient to any meaningful degree, these systems can already provoke substantial attachment and sometimes intense emotional responses in users. Furthermore, rapid advances in AI technology could soon create AIs of plausibly debatable sentience and moral standing, at least by some relevant (...)
    2 citations
  6.
    Autonomous AI Systems in Conflict: Emergent Behavior and Its Impact on Predictability and Reliability.Daniel Trusilo - 2023 - Journal of Military Ethics 22 (1):2-17.
    The development of complex autonomous systems that use artificial intelligence (AI) is changing the nature of conflict. In practice, autonomous systems will be extensively tested before being operationally deployed to ensure system behavior is reliable in expected contexts. However, the complexity of autonomous systems means that they will demonstrate emergent behavior in the open context of real-world conflict environments. This article examines the novel implications of emergent behavior of autonomous AI systems designed for conflict through two (...)
  7. Trustworthy medical AI systems need to know when they don’t know.Thomas Grote - forthcoming - Journal of Medical Ethics.
    There is much to learn from Durán and Jongsma’s paper.1 One particularly important insight concerns the relationship between epistemology and ethics in medical artificial intelligence. In clinical environments, the task of AI systems is to provide risk estimates or diagnostic decisions, which then need to be weighed by physicians. Hence, while the implementation of AI systems might give rise to ethical issues—for example, overtreatment, defensive medicine or paternalism2—the issue that lies at the heart is an epistemic problem: how (...)
    6 citations
  8.
    Evidentiality.A. I︠U︡ Aĭkhenvalʹd - 2004 - New York: Oxford University Press.
    In some languages every statement must contain a specification of the type of evidence on which it is based: for example, whether the speaker saw it, or heard it, or inferred it from indirect evidence, or learnt it from someone else. This grammatical reference to information source is called 'evidentiality', and is one of the least described grammatical categories. Evidentiality systems differ in how complex they are: some distinguish just two terms (eyewitness and noneyewitness, or reported and everything else), (...)
    49 citations
  9. Varieties of transparency: exploring agency within AI systems.Gloria Andrada, Robert William Clowes & Paul Smart - 2023 - AI and Society 38 (4):1321-1331.
    AI systems play an increasingly important role in shaping and regulating the lives of millions of human beings across the world. Calls for greater transparency from such systems have been widespread. However, there is considerable ambiguity concerning what “transparency” actually means, and therefore, what greater transparency might entail. While, according to some debates, transparency requires seeing through the artefact or device, widespread calls for transparency imply seeing into different aspects of AI systems. These two notions are in (...)
    14 citations
  10.
    Coming to Terms with the Black Box Problem: How to Justify AI Systems in Health Care.Ryan Marshall Felder - 2021 - Hastings Center Report 51 (4):38-45.
    The use of opaque, uninterpretable artificial intelligence systems in health care can be medically beneficial, but it is often viewed as potentially morally problematic on account of this opacity—because the systems are black boxes. Alex John London has recently argued that opacity is not generally problematic, given that many standard therapies are explanatorily opaque and that we can rely on statistical validation of the systems in deciding whether to implement them. But is statistical validation sufficient to justify (...)
    6 citations
  11. How AI Systems Can Be Blameworthy.Hannah Altehenger, Leonhard Menges & Peter Schulte - 2024 - Philosophia (4):1-24.
    AI systems, like self-driving cars, healthcare robots, or Autonomous Weapon Systems, already play an increasingly important role in our lives and will do so to an even greater extent in the near future. This raises a fundamental philosophical question: who is morally responsible when such systems cause unjustified harm? In the paper, we argue for the admittedly surprising claim that some of these systems can themselves be morally responsible for their conduct in an important and everyday (...)
  12. Can AI systems have free will?Christian List - manuscript
    While there has been much discussion of whether AI systems could function as moral agents or acquire sentience, there has been relatively little discussion of whether AI systems could have free will. In this article, I sketch a framework for thinking about this question. I argue that, to determine whether an AI system has free will, we should not look for some mysterious property, expect its underlying algorithms to be indeterministic, or ask whether the system is unpredictable. Rather, (...)
  13.
    Bias in algorithms of AI systems developed for COVID-19: A scoping review.Janet Delgado, Alicia de Manuel, Iris Parra, Cristian Moyano, Jon Rueda, Ariel Guersenzvaig, Txetxu Ausin, Maite Cruz, David Casacuberta & Angel Puyol - 2022 - Journal of Bioethical Inquiry 19 (3):407-419.
    To analyze which ethically relevant biases have been identified by academic literature in artificial intelligence algorithms developed either for patient risk prediction and triage, or for contact tracing to deal with the COVID-19 pandemic. Additionally, to specifically investigate whether the role of social determinants of health have been considered in these AI developments or not. We conducted a scoping review of the literature, which covered publications from March 2020 to April 2021. ​Studies mentioning biases on AI algorithms developed for contact (...)
    2 citations
  14.
    Civics and Moral Education in Singapore: lessons for citizenship education?Joy Ai - 1998 - Journal of Moral Education 27 (4):505-524.
    Civics and Moral Education was implemented as a new moral education programme in Singapore schools in 1992. This paper argues that the underlying theme is that of citizenship training and that new measures are under way to strengthen the capacity of the school system to transmit national values for economic and political socialisation. The motives and motivation for retaining a formal moral education programme have remained strong. A discussion of the structure and content of key modules in Civics and Moral Education (...)
  15.
    Development Practices of Trusted AI Systems among Canadian Data Scientists.Jinnie Shin, Okan Bulut & Mark J. Gierl - 2020 - International Review of Information Ethics 28.
    The introduction of Artificial Intelligence systems has demonstrated impeccable potential and benefits to enhance the decision-making processes in our society. However, despite the successful performance of AI systems to date, skepticism and concern remain regarding whether AI systems could form a trusting relationship with human users. Developing trusted AI systems requires careful consideration and evaluation of their reproducibility, interpretability, and fairness, which, in turn, poses increased expectations and responsibilities for data scientists. Therefore, the current study (...)
  16. Moral Responsibility for AI Systems.Sander Beckers - forthcoming - Advances in Neural Information Processing Systems 36 (NeurIPS 2023).
    As more and more decisions that have a significant ethical dimension are being outsourced to AI systems, it is important to have a definition of moral responsibility that can be applied to AI systems. Moral responsibility for an outcome of an agent who performs some action is commonly taken to involve both a causal condition and an epistemic condition: the action should cause the outcome, and the agent should have been aware -- in some form or other -- (...)
  17. The Point of Blaming AI Systems.Hannah Altehenger & Leonhard Menges - 2024 - Journal of Ethics and Social Philosophy 27 (2).
    As Christian List (2021) has recently argued, the increasing arrival of powerful AI systems that operate autonomously in high-stakes contexts creates a need for “future-proofing” our regulatory frameworks, i.e., for reassessing them in the face of these developments. One core part of our regulatory frameworks that dominates our everyday moral interactions is blame. Therefore, “future-proofing” our extant regulatory frameworks in the face of the increasing arrival of powerful AI systems requires, among other things, that we ask whether it (...)
    1 citation
  18.
    Transparency for AI systems: a value-based approach.Stefan Buijsman - 2024 - Ethics and Information Technology 26 (2):1-11.
    With the widespread use of artificial intelligence, it becomes crucial to provide information about these systems and how they are used. Governments aim to disclose their use of algorithms to establish legitimacy and the EU AI Act mandates forms of transparency for all high-risk and limited-risk systems. Yet, what should the standards for transparency be? What information is needed to show to a wide public that a certain system can be used legitimately and responsibly? I argue that process-based (...)
  19. Supporting human autonomy in AI systems.Rafael Calvo, Dorian Peters, Karina Vold & Richard M. Ryan - 2020 - In Christopher Burr & Luciano Floridi (eds.), Ethics of digital well-being: a multidisciplinary approach. Springer.
    Autonomy has been central to moral and political philosophy for millennia, and has been positioned as a critical aspect of both justice and wellbeing. Research in psychology supports this position, providing empirical evidence that autonomy is critical to motivation, personal growth and psychological wellness. Responsible AI will require an understanding of, and ability to effectively design for, human autonomy (rather than just machine autonomy) if it is to genuinely benefit humanity. Yet the effects on human autonomy of digital experiences are (...)
    10 citations
  20. Embedding Values in Artificial Intelligence (AI) Systems.Ibo van de Poel - 2020 - Minds and Machines 30 (3):385-409.
    Organizations such as the EU High-Level Expert Group on AI and the IEEE have recently formulated ethical principles and (moral) values that should be adhered to in the design and deployment of artificial intelligence (AI). These include respect for autonomy, non-maleficence, fairness, transparency, explainability, and accountability. But how can we ensure and verify that an AI system actually respects these values? To help answer this question, I propose an account for determining when an AI system can be said to embody (...)
    51 citations
  21.
    Responsibility of AI Systems.Mehdi Dastani & Vahid Yazdanpanah - 2023 - AI and Society 38 (2):843-852.
    To support the trustworthiness of AI systems, it is essential to have precise methods to determine what or who is to account for the behaviour, or the outcome, of AI systems. The assignment of responsibility to an AI system is closely related to the identification of individuals or elements that have caused the outcome of the AI system. In this work, we present an overview of approaches that aim at modelling responsibility of AI systems, discuss their advantages (...)
    2 citations
  22.
    (1 other version)Beneficent Intelligence: A Capability Approach to Modeling Benefit, Assistance, and Associated Moral Failures Through AI Systems.Alex John London & Hoda Heidari - 2024 - Minds and Machines 34 (4):1-37.
    The prevailing discourse around AI ethics lacks the language and formalism necessary to capture the diverse ethical concerns that emerge when AI systems interact with individuals. Drawing on Sen and Nussbaum’s capability approach, we present a framework formalizing a network of ethical concepts and entitlements necessary for AI systems to confer meaningful benefit or assistance to stakeholders. Such systems enhance stakeholders’ ability to advance their life plans and well-being while upholding their fundamental rights. We characterize two necessary (...)
    2 citations
  23.
    Moral control and ownership in AI systems.Raul Gonzalez Fabre, Javier Camacho Ibáñez & Pedro Tejedor Escobar - 2021 - AI and Society 36 (1):289-303.
    AI systems are bringing an augmentation of human capabilities to shape the world. They may also drag a replacement of human conscience in large chunks of life. AI systems can be designed to leave moral control in human hands, to obstruct or diminish that moral control, or even to prevent it, replacing human morality with pre-packaged or developed ‘solutions’ by the ‘intelligent’ machine itself. Artificial Intelligent systems (AIS) are increasingly being used in multiple applications and receiving more (...)
    2 citations
  24.
    AI systems and the question of African personhood.Diana-Abasi Ibanga - forthcoming - AI and Society:1-11.
    An increasing number of African philosophers are engaging with artificial intelligence (AI), especially in relation to the question of robotic persons. While some argue in support, others argue against robotic personhood. There are two dominant theories of personhood—humanist and posthumanist accounts—in the African moral context. Both theories agree that being human is an insufficient condition to be recognized as a person. My interest in this article is to show how the two theories might support or deny robotic personhood. The question is: what are (...)
  25.
    The “big red button” is too late: an alternative model for the ethical evaluation of AI systems.Thomas Arnold & Matthias Scheutz - 2018 - Ethics and Information Technology 20 (1):59-69.
    As a way to address both ominous and ordinary threats of artificial intelligence, researchers have started proposing ways to stop an AI system before it has a chance to escape outside control and cause harm. A so-called “big red button” would enable human operators to interrupt or divert a system while preventing the system from learning that such an intervention is a threat. Though an emergency button for AI seems to make intuitive sense, that approach ultimately concentrates on the point (...)
    11 citations
  26.
    The meta-ontology of AI systems with human-level intelligence.Roman Krzanowski & Pawel Polak - 2022 - Zagadnienia Filozoficzne W Nauce 73:197-230.
    In this paper, we examine the meta-ontology of AI systems with human-level intelligence, with us denoting such AI systems as AI E. Meta-ontology in philosophy is a discourse centered on ontology, ontological commitment, and the truth condition of ontological theories. We therefore discuss how meta-ontology is conceptualized for AI E systems. We posit that the meta-ontology of AI E systems is not concerned with computational representations of reality in the form of structures, data constructs, or computational (...)
    1 citation
  27.
    Conserving involution in residuated structures.Ai-ni Hsieh & James G. Raftery - 2007 - Mathematical Logic Quarterly 53 (6):583-609.
    This paper establishes several algebraic embedding theorems, each of which asserts that a certain kind of residuated structure can be embedded into a richer one. In almost all cases, the original structure has a compatible involution, which must be preserved by the embedding. The results, in conjunction with previous findings, yield separative axiomatizations of the deducibility relations of various substructural formal systems having double negation and contraposition axioms. The separation theorems go somewhat further than earlier ones in the literature, (...)
    6 citations
  28.
    An interdisciplinary account of the terminological choices by EU policymakers ahead of the final agreement on the AI Act: AI system, general purpose AI system, foundation model, and generative AI.David Fernández-Llorca, Emilia Gómez, Ignacio Sánchez & Gabriele Mazzini - forthcoming - Artificial Intelligence and Law:1-14.
    The European Union’s Artificial Intelligence Act (AI Act) is a groundbreaking regulatory framework that integrates technical concepts and terminology from the rapidly evolving ecosystems of AI research and innovation into the legal domain. Precise definitions accessible to both AI experts and lawyers are crucial for the legislation to be effective. This paper provides an interdisciplinary analysis of the concepts of AI system, general purpose AI system, foundation model and generative AI across the different versions of the legal text (Commission proposal, (...)
  29.
    Africa, ChatGPT, and Generative AI Systems: Ethical Benefits, Concerns, and the Need for Governance.Kutoma Wakunuma & Damian Eke - 2024 - Philosophies 9 (3):80.
    This paper examines the impact and implications of ChatGPT and other generative AI technologies within the African context while looking at the ethical benefits and concerns that are particularly pertinent to the continent. Through a robust analysis of ChatGPT and other generative AI systems using established approaches for analysing the ethics of emerging technologies, this paper provides unique ethical benefits and concerns for these systems in the African context. This analysis combined approaches such as anticipatory technology ethics (ATE), (...)
  30.
    New Pythias of public administration: ambiguity and choice in AI systems as challenges for governance.Fernando Filgueiras - 2022 - AI and Society 37 (4):1473-1486.
    As public administrations adopt artificial intelligence (AI), we see this transition has the potential to transform public service and public policies, by offering a rapid turnaround on decision making and service delivery. However, a recent series of criticisms have pointed to problematic aspects of mainstreaming AI systems in public administration, noting troubled outcomes in terms of justice and values. The argument supplied here is that any public administration adopting AI systems must consider and address ambiguities and uncertainties surrounding (...)
    3 citations
  31.
    The Switch, the Ladder, and the Matrix: Models for Classifying AI Systems.Jakob Mökander, Margi Sheth, David S. Watson & Luciano Floridi - 2023 - Minds and Machines 33 (1):221-248.
    Organisations that design and deploy artificial intelligence (AI) systems increasingly commit themselves to high-level, ethical principles. However, there still exists a gap between principles and practices in AI ethics. One major obstacle organisations face when attempting to operationalise AI Ethics is the lack of a well-defined material scope. Put differently, the question to which systems and processes AI ethics principles ought to apply remains unanswered. Of course, there exists no universally accepted definition of AI, and different systems (...)
    1 citation
  32.
    Manifestations of xenophobia in AI systems.Nenad Tomasev, Jonathan Leader Maynard & Iason Gabriel - forthcoming - AI and Society:1-23.
    Xenophobia is one of the key drivers of marginalisation, discrimination, and conflict, yet many prominent machine learning fairness frameworks fail to comprehensively measure or mitigate the resulting xenophobic harms. Here we aim to bridge this conceptual gap and help facilitate safe and ethical design of artificial intelligence (AI) solutions. We ground our analysis of the impact of xenophobia by first identifying distinct types of xenophobic harms, and then applying this framework across a number of prominent AI application domains, reviewing the (...)
  33.
    Can AI systems become wise? A note on artificial wisdom.Ana Sinha & Pooja Lakhanpal - forthcoming - AI and Society:1-2.
  34. Bioinformatics advances in saliva diagnostics.Ji-Ye Ai, Barry Smith & David T. W. Wong - 2012 - International Journal of Oral Science 4 (2):85--87.
    There is a need recognized by the National Institute of Dental & Craniofacial Research and the National Cancer Institute to advance basic, translational and clinical saliva research. The goal of the Salivaomics Knowledge Base (SKB) is to create a data management system and web resource constructed to support human salivaomics research. To maximize the utility of the SKB for retrieval, integration and analysis of data, we have developed the Saliva Ontology and SDxMart. This article reviews the informatics advances in saliva (...)
    2 citations
  35.
    Can Robotic AI Systems Be Virtuous and Why Does This Matter?Mihaela Constantinescu & Roger Crisp - 2022 - International Journal of Social Robotics 14 (6):1547–1557.
    3 citations
  36.
    The Rebugnant Conclusion: Utilitarianism, Insects, Microbes, and AI Systems.Jeff Sebo - 2023 - Ethics, Policy and Environment 26 (2):249-264.
    This paper considers questions that small animals and AI systems raise for utilitarianism. Specifically, if these beings have more welfare than humans and other large animals, then utilitarianism implies that we should prioritize them, all else equal. This could lead to a ‘rebugnant conclusion’, according to which we should, say, create large populations of small animals rather than small populations of large animals. It could also lead to a ‘Pascal’s bugging’, according to which we should, say, prioritize large populations (...)
    3 citations
  37. Ethical assessments and mitigation strategies for biases in AI-systems used during the COVID-19 pandemic.Alicia De Manuel, Janet Delgado, Parra Jonou Iris, Txetxu Ausín, David Casacuberta, Maite Cruz Piqueras, Ariel Guersenzvaig, Cristian Moyano, David Rodríguez-Arias, Jon Rueda & Angel Puyol - 2023 - Big Data and Society 10 (1).
    The main aim of this article is to reflect on the impact of biases related to artificial intelligence (AI) systems developed to tackle issues arising from the COVID-19 pandemic, with special focus on those developed for triage and risk prediction. A secondary aim is to review assessment tools that have been developed to prevent biases in AI systems. In addition, we provide a conceptual clarification for some terms related to biases in this particular context. We focus mainly on (...)
  38.
    Do AI systems Allow Online Advertisers to Control Others?Gabriel De Marco & T. Douglas - 2024 - In David Edmonds (ed.), AI Morality. Oxford: Oxford University Press USA.
  39.
    The Turing Test and the Issue of Trust in AI Systems.Paweł Stacewicz & Krzysztof Sołoducha - 2024 - Studies in Logic, Grammar and Rhetoric 69 (1):353-364.
    The Turing test, which is a verbal test of the indistinguishability of machine and human intelligence, is a historically important idea that has set a way of thinking about the AI (artificial intelligence) project that is still relevant today. According to it, the benchmark/blueprint for AI is human intelligence, and the key skill of AI should be its communicative proficiency – which includes explaining decisions made by the machine. Passing the original Turing test by a machine does not guarantee that (...)
  40. Feminist AI: Can We Expect Our AI Systems to Become Feminist?Galit Wellner & Tiran Rothman - 2020 - Philosophy and Technology 33 (2):191-205.
    The rise of AI-based systems has been accompanied by the belief that these systems are impartial and do not suffer from the biases that humans and older technologies express. It becomes evident, however, that gender and racial biases exist in some AI algorithms. The question is where the bias is rooted—in the training dataset or in the algorithm? Is it a linguistic issue or a broader sociological current? Works in feminist philosophy of technology and behavioral economics reveal the (...)
    11 citations
  41.
    Integrating AI ethics in wildlife conservation AI systems in South Africa: a review, challenges, and future research agenda.Irene Nandutu, Marcellin Atemkeng & Patrice Okouma - 2023 - AI and Society 38 (1):245-257.
    With the increased use of Artificial Intelligence (AI) in wildlife conservation, issues around whether AI-based monitoring tools in wildlife conservation comply with standards regarding AI Ethics are on the rise. This review aims to summarise current debates and identify gaps as well as suggest future research by investigating (1) current AI Ethics and AI Ethics issues in wildlife conservation, (2) Initiatives Stakeholders in AI for wildlife conservation should consider integrating AI Ethics in wildlife conservation. We find that the existing literature (...)
    2 citations
  42.
    Who is controlling whom? Reframing “meaningful human control” of AI systems in security.Pascal Vörös, Serhiy Kandul, Thomas Burri & Markus Christen - 2023 - Ethics and Information Technology 25 (1):1-7.
    Decisions in security contexts, including armed conflict, law enforcement, and disaster relief, often need to be taken under circumstances of limited information, stress, and time pressure. Since AI systems are capable of providing a certain amount of relief in such contexts, such systems will become increasingly important, be it as decision-support or decision-making systems. However, given that human life may be at stake in such situations, moral responsibility for such decisions should remain with humans. Hence the idea (...)
    1 citation
  43.
    Remote Agent: to boldly go where no AI system has gone before.Nicola Muscettola, P. Pandurang Nayak, Barney Pell & Brian C. Williams - 1998 - Artificial Intelligence 103 (1-2):5-47.
  44. Legal personhood for the integration of AI systems in the social context: a study hypothesis.Claudio Novelli - forthcoming - AI and Society:1-13.
    In this paper, I shall set out the pros and cons of assigning legal personhood to artificial intelligence systems under civil law. More specifically, I will provide arguments supporting a functionalist justification for conferring personhood on AIs, and I will try to identify what content this legal status might have from a regulatory perspective. Being a person in law implies the entitlement to one or more legal positions. I will mainly focus on liability, as it is one of the (...)
  45.  42
    Design publicity of black box algorithms: a support to the epistemic and ethical justifications of medical AI systems.Andrea Ferrario - 2022 - Journal of Medical Ethics 48 (7):492-494.
    In their article ‘Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI’, Durán and Jongsma discuss the epistemic and ethical challenges raised by black box algorithms in medical practice. The opacity of black box algorithms is an obstacle to the trustworthiness of their outcomes. Moreover, the use of opaque algorithms is not normatively justified in medical practice. The authors introduce a formalism, called computational reliabilism, which allows generating justified beliefs on the (...)
  46.  8
    Patient Consent and The Right to Notice and Explanation of AI Systems Used in Health Care.Meghan E. Hurley, Benjamin H. Lang, Kristin Marie Kostick-Quenet, Jared N. Smith & Jennifer Blumenthal-Barby - forthcoming - American Journal of Bioethics:1-13.
    Given the need for enforceable guardrails for artificial intelligence (AI) that protect the public and allow for innovation, the U.S. Government recently issued a Blueprint for an AI Bill of Rights which outlines five principles of safe AI design, use, and implementation. One in particular, the right to notice and explanation, requires accurately informing the public about the use of AI that impacts them in ways that are easy to understand. Yet, in the healthcare setting, it is unclear what goal (...)
  47.  4
    Logical Contradictions and Moral-legal Paradoxes at the Intersection of Scientometrics and Ethics of Scientific Publications (Oxymorons “Self-Pillage” and “Self-Theft” in the AI-System Called “Anti-Plagiarism”).В. О Лобовиков - 2024 - Siberian Journal of Philosophy 21 (3):5-19.
    The subject matter of this research is the contradictions and moral-legal antinomies arising in the philosophy of science in relation to the set of technologies called “Anti-Plagiarism”. The formal-logical and formal-axiological aspects of the notions “property”, “common property”, “private property”, “theft”, “plundering”, and others are considered. The paper argues for the urgent necessity of allowing authors unlimited reuse of any fragments of their previously published texts in new publications that actually contain novel scientific results. The condition is that such duplication is indispensable (...)
  48. Deep Learning Opacity, and the Ethical Accountability of AI Systems. A New Perspective.Gianfranco Basti & Giuseppe Vitiello - 2023 - In Raffaela Giovagnoli & Robert Lowe (eds.), The Logic of Social Practices II. Springer Nature Switzerland. pp. 21-73.
    In this paper we analyse the conditions for attributing to AI autonomous systems the ontological status of “artificial moral agents”, in the context of the “distributed responsibility” between humans and machines in Machine Ethics (ME). In order to address the fundamental issue in ME of the unavoidable “opacity” of their decisions with ethical/legal relevance, we start from the neuroethical evidence in cognitive science. In humans, the “transparency” and then the “ethical accountability” of their actions as responsible moral agents is (...)
  49. Towards an assessment of an AI system's validity by a TURING test.Rainer Knauf, Ilka Philippow & Avelino J. Gonzalez - 1997 - FLAIRS '97: Proc. Florida AI Research Symposium, Daytona Beach, FL, USA, May 11-14, 1997, pp. 397-401. Florida AI Research Society.
     
  50.  89
    The social acceptability of AI systems: Legitimacy, epistemology and marketing. [REVIEW]Romain Laufer - 1992 - AI and Society 6 (3):197-220.
    The expression, ‘the culture of the artificial’ results from the confusion between nature and culture, when nature mingles with culture to produce the ‘artificial’ and science becomes ‘the science of the artificial’. Artificial intelligence can thus be defined as the ultimate expression of the crisis affecting the very foundation of the system of legitimacy in Western society, i.e. Reason, and more precisely, Scientific Reason. The discussion focuses on the emergence of the culture of the artificial and the radical forms of (...)
1 — 50 / 970