Results for 'artificial agent'

954 found
  1. Artificial agents, good care, and modernity. Mark Coeckelbergh - 2015 - Theoretical Medicine and Bioethics 36 (4):265-277.
    When is it ethically acceptable to use artificial agents in health care? This article articulates some criteria for good care and then discusses whether machines as artificial agents that take over care tasks meet these criteria. Particular attention is paid to intuitions about the meaning of ‘care’, ‘agency’, and ‘taking over’, but also to the care process as a labour process in a modern organizational and financial-economic context. It is argued that while there is in principle no objection (...)
    17 citations
  2. Artificial agents among us: Should we recognize them as agents proper? Migle Laukyte - 2017 - Ethics and Information Technology 19 (1):1-17.
    In this paper, I discuss whether in a society where the use of artificial agents is pervasive, these agents should be recognized as having rights like those we accord to group agents. This kind of recognition I understand to be at once social and legal, and I argue that in order for an artificial agent to be so recognized, it will need to meet the same basic conditions in light of which group agents are granted such recognition. (...)
    14 citations
  3. Artificial agents - personhood in law and philosophy. Samir Chopra - manuscript.
    Thinking about how the law might decide whether to extend legal personhood to artificial agents provides a valuable testbed for philosophical theories of mind. Further, philosophical and legal theorising about personhood for artificial agents can be mutually informing. We investigate two case studies, drawing on legal discussions of the status of artificial agents. The first looks at the doctrinal difficulties presented by the contracts entered into by artificial agents. We conclude that it is not necessary or (...)
    4 citations
  4. Artificial agents’ explainability to support trust: considerations on timing and context. Guglielmo Papagni, Jesse de Pagter, Setareh Zafari, Michael Filzmoser & Sabine T. Koeszegi - 2023 - AI and Society 38 (2):947-960.
    Strategies for improving the explainability of artificial agents are a key approach to support the understandability of artificial agents’ decision-making processes and their trustworthiness. However, since explanations do not lend themselves to standardization, finding solutions that fit the algorithmic-based decision-making processes of artificial agents poses a compelling challenge. This paper addresses the concept of trust in relation to complementary aspects that play a role in interpersonal and human–agent relationships, such as users’ confidence and their perception of (...) agents’ reliability. Particularly, this paper focuses on non-expert users’ perspectives, since users with little technical knowledge are likely to benefit the most from “post-hoc”, everyday explanations. Drawing upon the explainable AI and social sciences literature, this paper investigates how artificial agents’ explainability and trust are interrelated at different stages of an interaction. Specifically, the possibility of implementing explainability as a trust-building, trust-maintenance and restoration strategy is investigated. To this end, the paper identifies and discusses the intrinsic limits and fundamental features of explanations, such as structural qualities and communication strategies. Accordingly, this paper contributes to the debate by providing recommendations on how to maximize the effectiveness of explanations for supporting non-expert users’ understanding and trust.
    4 citations
  5. Artificial agents and their moral nature. Luciano Floridi - 2014 - In Peter Kroes (ed.), The moral status of technical artefacts. Springer. pp. 185–212.
    Artificial agents, particularly but not only those in the infosphere (Floridi, Information – A Very Short Introduction, Oxford University Press, Oxford, 2010a), extend the class of entities that can be involved in moral situations, for they can be correctly interpreted as entities that can perform actions with good or evil impact (moral agents). In this chapter, I clarify the concepts of agent and of artificial agent and then distinguish between issues concerning their moral behaviour vs. issues (...)
    2 citations
  6. On the morality of artificial agents. Luciano Floridi & J. W. Sanders - 2004 - Minds and Machines 14 (3):349-379.
    Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility of (...)
    299 citations
  7. Affective Artificial Agents as sui generis Affective Artifacts. Marco Facchin & Giacomo Zanotti - 2024 - Topoi 43 (3).
    AI-based technologies are increasingly pervasive in a number of contexts. Our affective and emotional life is no exception. In this article, we analyze one way in which AI-based technologies can affect it. In particular, our investigation will focus on affective artificial agents, namely AI-powered software or robotic agents designed to interact with us in affectively salient ways. We build upon the existing literature on affective artifacts with the aim of providing an original analysis of affective artificial agents and (...)
    3 citations
  8. Developing artificial agents worthy of trust: “Would you buy a used car from this artificial agent?”. [REVIEW] F. S. Grodzinsky, K. W. Miller & M. J. Wolf - 2011 - Ethics and Information Technology 13 (1):17-27.
    There is a growing literature on the concept of e-trust and on the feasibility and advisability of “trusting” artificial agents. In this paper we present an object-oriented model for thinking about trust in both face-to-face and digitally mediated environments. We review important recent contributions to this literature regarding e-trust in conjunction with presenting our model. We identify three important types of trust interactions and examine trust from the perspective of a software developer. Too often, the primary focus of research (...)
    18 citations
  9. Social Cognition and Artificial Agents. Anna Strasser - 2017 - In Vincent C. Müller (ed.), Philosophy and theory of artificial intelligence 2017. Berlin: Springer. pp. 106-114.
    Standard notions in philosophy of mind have a tendency to characterize socio-cognitive abilities as if they were unique to sophisticated human beings. However, assuming that it is likely that we are soon going to share a large part of our social lives with various kinds of artificial agents, it is important to develop a conceptual framework providing notions that are able to account for various types of social agents. Recent minimal approaches to socio-cognitive abilities such as mindreading and commitment (...)
    1 citation
  10. Modelling Trust in Artificial Agents, A First Step Toward the Analysis of e-Trust. Mariarosaria Taddeo - 2010 - Minds and Machines 20 (2):243-257.
    This paper provides a new analysis of e-trust, trust occurring in digital contexts, among the artificial agents of a distributed artificial system. The analysis endorses a non-psychological approach and rests on a Kantian regulative ideal of a rational agent, able to choose the best option for itself, given a specific scenario and a goal to achieve. The paper first introduces e-trust, describing its relevance for contemporary society, and then presents a new theoretical analysis (...)
    45 citations
  11. Demonstrating sensemaking emergence in artificial agents: A method and an example. Olivier L. Georgeon & James B. Marshall - 2013 - International Journal of Machine Consciousness 5 (2):131-144.
    We propose an experimental method to study the possible emergence of sensemaking in artificial agents. This method involves analyzing the agent's behavior in a test bed environment that presents regularities in the possibilities of interaction afforded to the agent, while the agent has no presuppositions about the underlying functioning of the environment that explains such regularities. We propose a particular environment that permits such an experiment, called the Small Loop Problem. We argue that the agent's (...)
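The method described in this entry pairs an agent that has no built-in model of its environment with an environment whose interaction possibilities follow hidden regularities. The toy sketch below illustrates those ingredients only; it is not the Small Loop Problem itself, and its two-interaction environment and valence-learning rule are assumptions made for illustration.

```python
import random
from collections import defaultdict

def environment(prev, action):
    # Hidden regularity: an interaction "succeeds" (+1) only when it differs
    # from the previous one; the agent is never told this rule.
    return 1 if action != prev else -1

def run(steps=500, epsilon=0.1):
    valence = defaultdict(float)  # learned value of (context, action) pairs
    prev, score = "a", 0
    for _ in range(steps):
        context = prev
        if random.random() < epsilon:   # occasional exploration
            action = random.choice(["a", "b"])
        else:                           # otherwise pick the best-known option
            action = max(["a", "b"], key=lambda x: valence[(context, x)])
        outcome = environment(prev, action)
        # Incremental valence update from the experienced outcome.
        valence[(context, action)] += 0.1 * (outcome - valence[(context, action)])
        score += outcome
        prev = action
    return score

print(run())  # scores near +450/500 suggest the regularity was discovered
```

On the paper's approach, sensemaking would then be assessed by analyzing such behavioral traces rather than by inspecting the agent's internals.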
  12. This “Ethical Trap” Is for Roboticists, Not Robots: On the Issue of Artificial Agent Ethical Decision-Making. Keith W. Miller, Marty J. Wolf & Frances Grodzinsky - 2017 - Science and Engineering Ethics 23 (2):389-401.
    In this paper we address the question of when a researcher is justified in describing his or her artificial agent as demonstrating ethical decision-making. The paper is motivated by the amount of research being done that attempts to imbue artificial agents with expertise in ethical decision-making. It seems clear that computing systems make decisions, in that they make choices between different options; and there is scholarship in philosophy that addresses the distinction between ethical decision-making and general decision-making. (...)
    8 citations
  13. The Epistemological Foundations of Artificial Agents. Nick J. Lacey & M. H. Lee - 2003 - Minds and Machines 13 (3):339-365.
    A situated agent is one which operates within an environment. In most cases, the environment in which the agent exists will be more complex than the agent itself. This means that an agent, human or artificial, which wishes to carry out non-trivial operations in its environment must use techniques which allow an unbounded world to be represented within a cognitively bounded agent. We present a brief description of some important theories within the fields of (...)
  14. Artificial agents: responsibility & control gaps. Herman Veluwenkamp & Frank Hindriks - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    Artificial agents create significant moral opportunities and challenges. Over the last two decades, discourse has largely focused on the concept of a ‘responsibility gap.’ We argue that this concept is incoherent, misguided, and diverts attention from the core issue of ‘control gaps.’ Control gaps arise when there is a discrepancy between the causal control an agent exercises and the moral control it should possess or emulate. Such gaps present moral risks, often leading to harm or ethical violations. We (...)
  15. Risk Imposition by Artificial Agents: The Moral Proxy Problem. Johanna Thoma - 2022 - In Silja Voeneky, Philipp Kellmeyer, Oliver Mueller & Wolfram Burgard (eds.), The Cambridge Handbook of Responsible Artificial Intelligence: Interdisciplinary Perspectives. Cambridge University Press.
    Where artificial agents are not liable to be ascribed true moral agency and responsibility in their own right, we can understand them as acting as proxies for human agents, as making decisions on their behalf. What I call the ‘Moral Proxy Problem’ arises because it is often not clear for whom a specific artificial agent is acting as a moral proxy. In particular, we need to decide whether artificial agents should be acting as proxies for low-level (...)
    1 citation
  16. Artificial agents and the expanding ethical circle. Steve Torrance - 2013 - AI and Society 28 (4):399-414.
    I discuss the realizability and the ethical ramifications of Machine Ethics, from a number of different perspectives: I label these the anthropocentric, infocentric, biocentric and ecocentric perspectives. Each of these approaches takes a characteristic view of the position of humanity relative to other aspects of the designed and the natural worlds—or relative to the possibilities of ‘extra-human’ extensions to the ethical community. In the course of the discussion, a number of key issues emerge concerning the relation between technology and ethics, (...)
    8 citations
  17. On the Moral Equality of Artificial Agents. Christopher Wareham - 2011 - International Journal of Technoethics 2 (1):35-42.
    Artificial agents such as robots are performing increasingly significant ethical roles in society. As a result, there is a growing literature regarding their moral status with many suggesting it is justified to regard manufactured entities as having intrinsic moral worth. However, the question of whether artificial agents could have the high degree of moral status that is attributed to human persons has largely been neglected. To address this question, the author developed a respect-based account of the ethical criteria (...)
    6 citations
  18. What makes full artificial agents morally different. Erez Firt - forthcoming - AI and Society:1-10.
    In the research field of machine ethics, we commonly categorize artificial moral agents into four types, with the most advanced referred to as a full ethical agent, or sometimes a full-blown Artificial Moral Agent (AMA). This type has three main characteristics: autonomy, moral understanding and a certain level of consciousness, including intentional mental states, moral emotions such as compassion, the ability to praise and condemn, and a conscience. This paper aims to discuss various aspects of full-blown (...)
  20. Modeling artificial agents’ actions in context – a deontic cognitive event ontology. Miroslav Vacura - 2020 - Applied Ontology 15 (4):493-527.
    Although there have been efforts to integrate Semantic Web technologies with AI research on artificial agents, the two fields remain relatively isolated from each other. Herein, we introduce a new ontology framework designed to support the knowledge representation of artificial agents’ actions within the context of the actions of other autonomous agents, inspired by standard cognitive architectures. The framework consists of four parts: 1) an event ontology for information pertaining to actions and events; 2) an epistemic ontology containing (...)
  21. Artificial agents in social cognitive sciences. Thierry Chaminade & Jessica K. Hodgins - 2006 - Interaction Studies: Social Behaviour and Communication in Biological and Artificial Systems 7 (3):347-353.
  22. Ethics and consciousness in artificial agents. Steve Torrance - 2008 - AI and Society 22 (4):495-521.
    In what ways should we include future humanoid robots, and other kinds of artificial agents, in our moral universe? We consider the Organic view, which maintains that artificial humanoid agents, based on current computational technologies, could not count as full-blooded moral agents, nor as appropriate targets of intrinsic moral concern. On this view, artificial humanoids lack certain key properties of biological organisms, which preclude them from having full moral status. Computationally controlled systems, however advanced in their cognitive (...)
    63 citations
  23. On social laws for artificial agent societies: off-line design. Yoav Shoham & Moshe Tennenholtz - 1995 - Artificial Intelligence 73 (1-2):231-252.
  24. Hiding Behind Machines: Artificial Agents May Help to Evade Punishment. Till Feier, Jan Gogoll & Matthias Uhl - 2022 - Science and Engineering Ethics 28 (2):1-19.
    The transfer of tasks with sometimes far-reaching implications to autonomous systems raises a number of ethical questions. In addition to fundamental questions about the moral agency of these systems, behavioral issues arise. We investigate the empirically accessible question of whether the imposition of harm by an agent is systematically judged differently when the agent is artificial and not human. The results of a laboratory experiment suggest that decision-makers can actually avoid punishment more easily by delegating to machines (...)
    4 citations
  25. The ethics of designing artificial agents. Frances S. Grodzinsky, Keith W. Miller & Marty J. Wolf - 2008 - Ethics and Information Technology 10 (2-3):112-121.
    In their important paper “Autonomous Agents”, Floridi and Sanders use “levels of abstraction” to argue that computers are or may soon be moral agents. In this paper we use the same levels of abstraction to illuminate differences between human moral agents and computers. In their paper, Floridi and Sanders contributed definitions of autonomy, moral accountability and responsibility, but they have not explored deeply some essential questions that need to be answered by computer scientists who design artificial agents. One such (...)
    1 citation
  26. Can artificial agents act? Conceptual constellation for a de-humanized theory of action. Francesco Striano - 2024 - Scienza e Filosofia 31:224-244.
    This paper embarks on an exploration of the concept of agency, traditionally ascribed to humans, in the context of artificial intelligence (AI). In the first two sections, it challenges the conventional dichotomy of human agency and non-human instrumentality, arguing that advancements in technology have blurred these boundaries. In the third section, the paper introduces the reader to the philosophical perspective of new materialism, which assigns causal (...)
  27. Biologically Inspired Emotional Expressions for Artificial Agents. Beáta Korcsok, Veronika Konok, György Persa, Tamás Faragó, Mihoko Niitsuma, Ádám Miklósi, Péter Korondi, Péter Baranyi & Márta Gácsi - 2018 - Frontiers in Psychology 9:388957.
    A special area of human-machine interaction, the expression of emotions gains importance with the continuous development of artificial agents such as social robots or interactive mobile applications. We developed a prototype version of an abstract emotion visualization agent to express five basic emotions and a neutral state. In contrast to well-known symbolic characters (e.g., smileys) these displays follow general biological and ethological rules. We conducted a multiple questionnaire study on the assessment of the displays with Hungarian and Japanese (...)
  28. The Puzzle of Evaluating Moral Cognition in Artificial Agents. Madeline G. Reinecke, Yiran Mao, Markus Kunesch, Edgar A. Duéñez-Guzmán, Julia Haas & Joel Z. Leibo - 2023 - Cognitive Science 47 (8):e13315.
    In developing artificial intelligence (AI), researchers often benchmark against human performance as a measure of progress. Is this kind of comparison possible for moral cognition? Given that human moral judgment often hinges on intangible properties like “intention” which may have no natural analog in artificial agents, it may prove difficult to design a “like‐for‐like” comparison between the moral behavior of artificial and human agents. What would a measure of moral behavior for both humans and AI look like? (...)
    1 citation
  29. Meaning in Artificial Agents: The Symbol Grounding Problem Revisited. Dairon Rodríguez, Jorge Hermosillo & Bruno Lara - 2012 - Minds and Machines 22 (1):25-34.
    The Chinese room argument has presented a persistent headache in the search for Artificial Intelligence. Since it first appeared in the literature, various interpretations have been made, attempting to understand the problems posed by this thought experiment. Throughout all this time, some researchers in the Artificial Intelligence community have seen Symbol Grounding as proposed by Harnad as a solution to the Chinese room argument. The main thesis in this paper is that although related, these two issues present different (...)
    2 citations
  30. What is it like to encounter an autonomous artificial agent? Karsten Weber - 2013 - AI and Society 28 (4):483-489.
    Following up on Thomas Nagel’s paper “What is it like to be a bat?” and Alan Turing’s essay “Computing machinery and intelligence,” it shall be claimed that a successful interaction of human beings and autonomous artificial agents depends more on which characteristics human beings ascribe to the agent than on whether the agent really has those characteristics. It will be argued that Masahiro Mori’s concept of the “uncanny valley” as well as evidence from several empirical studies supports (...)
    3 citations
  31. A Conceptual and Computational Model of Moral Decision Making in Human and Artificial Agents. Wendell Wallach, Stan Franklin & Colin Allen - 2010 - Topics in Cognitive Science 2 (3):454-485.
    Recently, there has been a resurgence of interest in general, comprehensive models of human cognition. Such models aim to explain higher-order cognitive faculties, such as deliberation and planning. Given a computational representation, the validity of these models can be tested in computer simulations such as software agents or embodied robots. The push to implement computational models of this kind has created the field of artificial general intelligence (AGI). Moral decision making is arguably one of the most challenging tasks for (...)
    24 citations
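Wallach, Franklin and Allen implement moral decision making inside a comprehensive cognitive architecture; the sketch below is not that model. It only illustrates, with assumed names and weights, the general idea of a moral evaluation stage inserted into an agent's ordinary action selection, combining a top-down rule check with bottom-up learned values, a distinction familiar from this literature.

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    utility: float       # task utility as estimated by the agent
    harms_person: bool   # feature consulted by the top-down rule

def top_down(option):
    # Rule-based ("top-down") check: hard veto on duty-violating options.
    return float("-inf") if option.harms_person else 0.0

def bottom_up(option, learned_values):
    # Experience-based ("bottom-up") moral valence.
    return learned_values.get(option.name, 0.0)

def deliberate(options, learned_values):
    # Moral evaluation enters as one stage of ordinary action selection.
    score = lambda o: o.utility + top_down(o) + bottom_up(o, learned_values)
    return max(options, key=score)

options = [Option("shortcut", 5.0, harms_person=True),
           Option("detour", 3.0, harms_person=False)]
print(deliberate(options, {"detour": 0.5}).name)  # -> detour
```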
  32. How should artificial agents make risky choices on our behalf? Johanna Thoma - 2021 - LSE Philosophy Blog.
  33. Can Artificial Agents be Authors? João Vitor Schmidt - 2025 - Philosophy and Technology 38 (1):1-25.
    Current Generative Artificial Intelligence models have come remarkably close to human levels of linguistic and artistic excellence, challenging our conception of artworks as uniquely human products and raising an authorship problem, i.e., whether artificial agents can be regarded as genuine authors of their products. This paper provides a definition of institutional authorship to evaluate this possibility, using John Searle’s Speech Act Theory and Theory of Institutions. To apply the definition, we focus on artistic cases, assuming the institutional (...)
  34. ConsScale: A pragmatic scale for measuring the level of consciousness in artificial agents. Raul Arrabales, Agapito Ledezma & Araceli Sanchis - 2010 - Journal of Consciousness Studies 17 (3-4):3-4.
    One of the key problems the field of Machine Consciousness is currently facing is the need to accurately assess the potential level of consciousness that an artificial agent might develop. This paper presents a novel artificial consciousness scale designed to provide a pragmatic and intuitive reference in the evaluation of MC implementations. The version of ConsScale described in this work provides a comprehensive evaluation mechanism which enables the estimation of the potential degree of consciousness of most of (...)
    23 citations
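ConsScale orders cognitive levels into a hierarchy and rates an implementation by the criteria it satisfies cumulatively. As a rough illustration of that kind of evaluation mechanism only, here is a minimal rating function; the level names and criteria below are placeholders, not the scale's published definitions.

```python
# Cumulative level-based rating in the spirit of ConsScale; the levels and
# criteria here are illustrative placeholders, not the published scale.
LEVELS = [
    ("reactive",    {"senses", "acts"}),
    ("adaptive",    {"learns_from_feedback"}),
    ("attentional", {"directs_attention"}),
    ("executive",   {"plans", "reconsiders_plans"}),
]

def rate(capabilities):
    """Return the highest level whose criteria, and all lower ones, hold."""
    achieved = "below scale"
    for name, criteria in LEVELS:
        if criteria <= capabilities:   # set inclusion: all criteria met
            achieved = name
        else:
            break  # levels are cumulative: a gap stops the climb
    return achieved

print(rate({"senses", "acts", "learns_from_feedback"}))  # -> adaptive
```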
  35. Attributions toward Artificial Agents in a modified Moral Turing Test. Eyal Aharoni, Sharlene Fernandes, Daniel Brady, Caelan Alexander, Michael Criner, Kara Queen, Javier Rando, Eddy Nahmias & Victor Crespo - 2024 - Scientific Reports 14 (8458):1-11.
    Advances in artificial intelligence (AI) raise important questions about whether people view moral evaluations by AI systems similarly to human-generated moral evaluations. We conducted a modified Moral Turing Test (m-MTT), inspired by the proposal of Allen et al. (Exp Theor Artif Intell 352:24–28, 2004), by asking people to distinguish real human moral evaluations from those made by a popular advanced AI language model: GPT-4. A representative sample of 299 U.S. adults first rated the quality of moral evaluations when blinded to their (...)
    1 citation
  36. Representation in natural and artificial agents. M. Bickhard - 1999 - In Edwina Taborsky (ed.), Semiosis, Evolution, Energy: Towards a Reconceptualization of the Sign. Shaker Verlag. pp. 15-26.
  37. Beyond persons: extending the personal/subpersonal distinction to non-rational animals and artificial agents. Manuel de Pinedo-Garcia & Jason Noble - 2008 - Biology and Philosophy 23 (1):87-100.
    The distinction between personal level explanations and subpersonal ones has been subject to much debate in philosophy. We understand it as one between explanations that focus on an agent’s interaction with its environment, and explanations that focus on the physical or computational enabling conditions of such an interaction. The distinction, understood this way, is necessary for a complete account of any agent, rational or not, biological or artificial. In particular, we review some recent research in Artificial (...)
    7 citations
  38. A Pragmatic Approach to the Intentional Stance: Semantic, Empirical and Ethical Considerations for the Design of Artificial Agents. Guglielmo Papagni & Sabine Koeszegi - 2021 - Minds and Machines 31 (4):505-534.
    Artificial agents are progressively becoming more present in everyday-life situations and more sophisticated in their interaction affordances. In some specific cases, like Google Duplex, GPT-3 bots or DeepMind’s AlphaGo Zero, their capabilities reach or exceed human levels. The use contexts of everyday life necessitate making such agents understandable by laypeople. At the same time, displaying human levels of social behavior has kindled the debate over the adoption of Dennett’s ‘intentional stance’. By means of a comparative analysis of the (...)
    7 citations
  39. Intention reconsideration in artificial agents: a structured account. Fabrizio Cariani - 2025 - Philosophical Studies 182 (1):205-228.
    An important module in the Belief-Desire-Intention architecture for artificial agents (which builds on Michael Bratman’s work in the philosophy of action) focuses on the task of intention reconsideration. The theoretical task is to formulate principles governing when an agent ought to undo a prior committed intention and reopen deliberation. Extant proposals for such a principle, if sufficiently detailed, are either too task-specific or too computationally demanding. I propose that an agent ought to reconsider an intention whenever some (...)
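The abstract shown here is truncated before Cariani's own principle, so no attempt is made to reproduce it. The sketch below only shows the architectural slot the paper targets: a BDI-style loop with a pluggable reconsideration policy. The trigger used (reconsider when the intention's precondition fails) is a placeholder assumption, one classic cheap-to-check condition, not the paper's proposal.

```python
from dataclasses import dataclass, field

@dataclass
class Intention:
    goal: str
    precondition: callable            # beliefs -> bool
    plan: list = field(default_factory=list)

def deliberate(beliefs):
    # Trivial fallback deliberation: adopt a no-op recovery intention.
    return Intention("recover", lambda b: True, plan=["noop"])

def should_reconsider(beliefs, intention):
    # Placeholder policy: reopen deliberation when the precondition fails.
    return not intention.precondition(beliefs)

def run(intention, percepts):
    beliefs = {}
    for percept in percepts:
        beliefs.update(percept)
        if should_reconsider(beliefs, intention):
            intention = deliberate(beliefs)   # undo the prior commitment
        if intention.plan:
            print("executing", intention.plan.pop(0), "for", intention.goal)

run(Intention("deliver", lambda b: not b.get("door_locked"), ["go", "drop"]),
    percepts=[{}, {"door_locked": True}])
```

The theoretical question the paper addresses is what should replace the placeholder policy: too cautious and the agent deliberates constantly; too bold and it executes stale plans.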
  40. Adopting the intentional stance toward natural and artificial agents. Jairo Perez-Osorio & Agnieszka Wykowska - 2020 - Philosophical Psychology 33 (3):369-395.
    In our daily lives, we need to predict and understand others’ behavior in order to navigate through our social environment. Predictions concerning other humans’ behavior usually refer to their mental states, such as beliefs or intentions. Such a predictive strategy is called ‘adoption of the intentional stance.’ In this paper, we review literature related to the concept of intentional stance from the perspectives of philosophy, psychology, human development, culture, and human-robot interaction. We propose that adopting the intentional stance might be (...)
    9 citations
  41. Privacy and artificial agents, or, is Google reading my email? Samir Chopra & Laurence White - manuscript.
    In Proceedings of the International Joint Conference on Artificial Intelligence, 2007.
  42. Arguments as Drivers of Issue Polarisation in Debates Among Artificial Agents. Felix Kopecky - 2022 - Journal of Artificial Societies and Social Simulation 25 (1).
    Can arguments and their properties influence the development of issue polarisation in debates among artificial agents? This paper presents an agent-based model of debates with logical constraints based on the theory of dialectical structures. Simulations on this model reveal that the exchange of arguments can drive polarisation even without social influence, and that the usage of different argumentation strategies can influence the obtained levels of polarisation.
    2 citations
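As a schematic illustration of the headline claim, that the exchange of arguments alone can drive polarisation without agents imitating one another's opinions, here is a minimal agent-based sketch. It is not Kopecky's dialectical-structures model: the stance representation, acceptance threshold, and "fortifying" argumentation strategy are all assumptions made for illustration.

```python
import random

random.seed(0)
N_AGENTS, N_ROUNDS = 50, 2000
agents = [random.uniform(-0.2, 0.2) for _ in range(N_AGENTS)]  # initial stances

def pick_argument(stance):
    # "Fortifying" strategy: speakers voice arguments supporting their side.
    return 0.05 if stance >= 0 else -0.05

for _ in range(N_ROUNDS):
    speaker, hearer = random.sample(range(N_AGENTS), 2)
    arg = pick_argument(agents[speaker])
    # No opinion imitation: hearers respond to the argument itself, and
    # discount arguments from the opposite camp once committed.
    if arg * agents[hearer] >= 0 or abs(agents[hearer]) < 0.1:
        agents[hearer] = max(-1.0, min(1.0, agents[hearer] + arg))

print(f"stance spread: {max(agents) - min(agents):.2f}")  # near 2.0 = polarised
```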
  43. Counterpossibles, Functional Decision Theory, and Artificial Agents. Alexander W. Kocurek - 2024 - In Fausto Carcassi, Tamar Johnson, Søren Brinck Knudstorp, Sabina Domínguez Parrado, Pablo Rivas Robledo & Giorgio Sbardolini (eds.), Proceedings of the 24th Amsterdam Colloquium. pp. 218-225.
    Recently, Yudkowsky and Soares (2018) and Levinstein and Soares (2020) have developed a novel decision theory, Functional Decision Theory (FDT). They claim FDT outperforms both Evidential Decision Theory (EDT) and Causal Decision Theory (CDT). Yet FDT faces several challenges. First, it yields some very counterintuitive results (Schwarz 2018; MacAskill 2019). Second, it requires a theory of counterpossibles, for which even Yudkowsky and Soares (2018) and Levinstein and Soares (2020) admit we lack a “full” or “satisfactory” account. Here, I focus on (...)
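A worked expected-value comparison on Newcomb's problem, a standard case separating the three theories named in the abstract, may help orient readers: EDT and FDT recommend one-boxing, CDT two-boxing. The numbers are illustrative and not taken from the paper, and the FDT lines simply encode the informal idea that the predictor runs the same decision algorithm as the agent.

```python
# Newcomb's problem: the opaque box holds $1M iff the predictor (accuracy p)
# foresaw one-boxing; the transparent box always holds $1k.
p, M, K = 0.99, 1_000_000, 1_000

# EDT conditions on the act: choosing one-boxing is evidence the box is full.
edt_one, edt_two = p * M, (1 - p) * M + K

# CDT holds the prediction fixed: for any fixed chance q of a full box,
# two-boxing adds $1k, so CDT two-boxes regardless of q.
q = 0.5
cdt_one, cdt_two = q * M, q * M + K

# FDT intervenes on the decision *algorithm*: the predictor instantiates the
# same algorithm, so the prediction co-varies with the output, as under EDT.
fdt_one, fdt_two = p * M, (1 - p) * M + K

print(f"EDT: one-box {edt_one:,.0f} vs two-box {edt_two:,.0f}")  # one-box
print(f"CDT: one-box {cdt_one:,.0f} vs two-box {cdt_two:,.0f}")  # two-box
print(f"FDT: one-box {fdt_one:,.0f} vs two-box {fdt_two:,.0f}")  # one-box
```

Evaluating the FDT lines properly requires counterpossibles ("if my algorithm had output two-boxing..."), which is exactly the gap the paper presses on.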
  44. Towards socially-competent and culturally-adaptive artificial agents. Chiara Bassetti, Enrico Blanzieri, Stefano Borgo & Sofia Marangon - 2022 - Interaction Studies 23 (3):469-512.
    The development of artificial agents for social interaction pushes researchers to enrich robots with social skills and knowledge about (local) social norms. One possibility is to distinguish the expressive and the functional orders during a human-robot interaction. The overarching aim of this work is to set a framework to make the artificial agent socially competent beyond dyadic interaction – interaction in varying multi-party social situations – and beyond individual-based user personalization, thereby enlarging the current conception of “culturally-adaptive”. The core (...)
    1 citation
  45. Trust and multi-agent systems: applying the diffuse, default model of trust to experiments involving artificial agents. [REVIEW] Jeff Buechner & Herman T. Tavani - 2011 - Ethics and Information Technology 13 (1):39-51.
    We argue that the notion of trust, as it figures in an ethical context, can be illuminated by examining research in artificial intelligence on multi-agent systems in which commitment and trust are modeled. We begin with an analysis of a philosophical model of trust based on Richard Holton’s interpretation of P. F. Strawson’s writings on freedom and resentment, and we show why this account of trust is difficult to extend to artificial agents (AAs) as well as to (...)
    16 citations
  46. Understanding Sophia? On human interaction with artificial agents. Thomas Fuchs - 2024 - Phenomenology and the Cognitive Sciences 23 (1):21-42.
    Advances in artificial intelligence (AI) create an increasing similarity between the performance of AI systems or AI-based robots and human communication. They raise the questions of whether it is possible to communicate with, understand, and even empathically perceive artificial agents; of whether we should ascribe actual subjectivity and thus quasi-personal status to them beyond a certain level of simulation; and of what the impact of an increasing dissolution of the distinction between simulated and real encounters will be. (1) To answer these questions, (...)
    6 citations
  47. Can we Develop Artificial Agents Capable of Making Good Moral Decisions?: Wendell Wallach and Colin Allen: Moral Machines: Teaching Robots Right from Wrong, Oxford University Press, 2009, xi + 273 pp, ISBN: 978-0-19-537404-9. Herman T. Tavani - 2011 - Minds and Machines 21 (3):465-474.
  49. Artificial moral agents are infeasible with foreseeable technologies. Patrick Chisan Hew - 2014 - Ethics and Information Technology 16 (3):197-206.
    For an artificial agent to be morally praiseworthy, its rules for behaviour and the mechanisms for supplying those rules must not be supplied entirely by external humans. Such systems are a substantial departure from current technologies and theory, and are an unlikely prospect. With foreseeable technologies, an artificial agent will carry zero responsibility for its behavior and humans will retain full responsibility.
    16 citations
  50. The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents. [REVIEW] Nick Bostrom - 2012 - Minds and Machines 22 (2):71-85.
    This paper discusses the relation between intelligence and motivation in artificial agents, developing and briefly arguing for two theses. The first, the orthogonality thesis, holds (with some caveats) that intelligence and final goals (purposes) are orthogonal axes along which possible artificial intellects can freely vary—more or less any level of intelligence could be combined with more or less any final goal. The second, the instrumental convergence thesis, holds that as long as they possess a sufficient level of intelligence, (...)
    46 citations
1–50 of 954