Results for 'types of agents, class of relevant agents - human persons, robots or artificial agents'

968 found
  1. Kantian Ethics in the Age of Artificial Intelligence and Robotics.Ozlem Ulgen - 2017 - Questions of International Law 1 (43):59-83.
    Artificial intelligence and robotics are pervasive in daily life and set to expand to new levels, potentially replacing human decision-making and action. Self-driving cars, home and healthcare robots, and autonomous weapons are some examples. A distinction appears to be emerging between potentially benevolent civilian uses of the technology (e.g. unmanned aerial vehicles delivering medicines), and potentially malevolent military uses (e.g. lethal autonomous weapons killing human combatants). Machine-mediated human interaction challenges the philosophical basis of existence and ethical conduct. Aside from technical challenges of ensuring ethical conduct in artificial intelligence and robotics, there are moral questions about the desirability of replacing human functions and the human mind with such technology. How will artificial intelligence and robotics engage in moral reasoning in order to act ethically? Is there a need for a new set of moral rules? What happens to human interaction when it is mediated by technology? Should such technology be used to end human life? Who bears responsibility for wrongdoing or harmful conduct by artificial intelligence and robotics?
    Whilst Kant may be familiar to international lawyers for setting restraints on the use of force and rules for perpetual peace, his foundational work on ethics provides an inclusive moral philosophy for assessing the ethical conduct of individuals and states and is thus relevant to discussions on the use and development of artificial intelligence and robotics. His philosophy is inclusive because it incorporates justifications for morals and legitimate responses to immoral conduct, and applies to all human agents irrespective of whether they are wrongdoers, unlawful combatants, or unjust enemies. Humans are at the centre of rational thinking, action, and norm-creation, so that the rationale for restraints on methods and means of warfare, for example, is based on preserving human dignity as well as ensuring conditions for perpetual peace among states. Unlike utilitarian arguments, which favour use of autonomous weapons on the basis of cost-benefit reasoning or the potential to save lives, Kantian ethics establish non-consequentialist and deontological rules which are good in themselves to follow and not dependent on expediency or achieving a greater public good.
    Kantian ethics make two distinct contributions to the debate. First, they provide a human-centric ethical framework whereby human existence and capacity are at the centre of a norm-creating moral philosophy guiding our understanding of moral conduct. Second, the ultimate aim of Kantian ethics is practical philosophy that is relevant and applicable to achieving moral conduct.
    I will seek to address the moral questions outlined above by exploring how core elements of Kantian ethics relate to the use of artificial intelligence and robotics in the civilian and military spheres. Section 2 sets out and examines core elements of Kantian ethics: the categorical imperative; autonomy of the will; rational beings and rational thinking capacity; and human dignity and humanity as an end in itself. Sections 3-7 consider how these core elements apply to artificial intelligence and robotics, with discussion of fully autonomous and human-machine rule-generating approaches; types of moral reasoning; the difference between ‘human will’ and ‘machine will’; and respecting human dignity.
    5 citations
  2. Risk Imposition by Artificial Agents: The Moral Proxy Problem.Johanna Thoma - 2022 - In Silja Voeneky, Philipp Kellmeyer, Oliver Mueller & Wolfram Burgard (eds.), The Cambridge Handbook of Responsible Artificial Intelligence: Interdisciplinary Perspectives. Cambridge University Press.
    Where artificial agents are not liable to be ascribed true moral agency and responsibility in their own right, we can understand them as acting as proxies for human agents, as making decisions on their behalf. What I call the ‘Moral Proxy Problem’ arises because it is often not clear for whom a specific artificial agent is acting as a moral proxy. In particular, we need to decide whether artificial agents should be acting as (...)
    1 citation
  3. Understanding Sophia? On human interaction with artificial agents.Thomas Fuchs - 2024 - Phenomenology and the Cognitive Sciences 23 (1):21-42.
    Advances in artificial intelligence (AI) create an increasing similarity between the performance of AI systems or AI-based robots and human communication. They raise the questions: whether it is possible to communicate with, understand, and even empathically perceive artificial agents; whether we should ascribe actual subjectivity and thus quasi-personal status to them beyond a certain level of simulation; what will be the impact of an increasing dissolution of the distinction between simulated and real encounters. (1) To (...)
    6 citations
  4. A Conceptual and Computational Model of Moral Decision Making in Human and Artificial Agents.Wendell Wallach, Stan Franklin & Colin Allen - 2010 - Topics in Cognitive Science 2 (3):454-485.
    Recently, there has been a resurgence of interest in general, comprehensive models of human cognition. Such models aim to explain higher-order cognitive faculties, such as deliberation and planning. Given a computational representation, the validity of these models can be tested in computer simulations such as software agents or embodied robots. The push to implement computational models of this kind has created the field of artificial general intelligence (AGI). Moral decision making is arguably one of the most (...)
    24 citations
  5. What makes full artificial agents morally different.Erez Firt - forthcoming - AI and Society:1-10.
    In the research field of machine ethics, we commonly categorize artificial moral agents into four types, with the most advanced referred to as a full ethical agent, or sometimes a full-blown Artificial Moral Agent (AMA). This type has three main characteristics: autonomy, moral understanding and a certain level of consciousness, including intentional mental states, moral emotions such as compassion, the ability to praise and condemn, and a conscience. This paper aims to discuss various aspects of full-blown (...)
  6. Naturally occurring gestures in a human–robot teaching scenario.Nuno Otero, Chrystopher L. Nehaniv, Dag Sverre Syrdal & Kerstin Dautenhahn - 2008 - Interaction Studies: Social Behaviour and Communication in Biological and Artificial Systems 9 (3):519-550.
    This paper describes our general framework for the investigation of how human gestures can be used to facilitate the interaction and communication between humans and robots. Two studies were carried out to reveal which “naturally occurring” gestures can be observed in a scenario where users had to explain to a robot how to perform a home task. Both studies followed a within-subjects design: participants had to demonstrate how to lay a table to a robot using two different methods (...)
    1 citation
  7. Artificial Moral Agents: A Survey of the Current Status. [REVIEW]José-Antonio Cervantes, Sonia López, Luis-Felipe Rodríguez, Salvador Cervantes, Francisco Cervantes & Félix Ramos - 2020 - Science and Engineering Ethics 26 (2):501-532.
    One of the objectives in the field of artificial intelligence for some decades has been the development of artificial agents capable of coexisting in harmony with people and other systems. The computing research community has made efforts to design artificial agents capable of doing tasks the way people do, tasks requiring cognitive mechanisms such as planning, decision-making, and learning. The application domains of such software agents are evident nowadays. Humans are experiencing the inclusion of (...)
    32 citations
  8. Guilty Artificial Minds: Folk Attributions of Mens Rea and Culpability to Artificially Intelligent Agents.Michael T. Stuart & Markus Kneer - 2021 - Proceedings of the ACM on Human-Computer Interaction 5 (CSCW2).
    While philosophers hold that it is patently absurd to blame robots or hold them morally responsible [1], a series of recent empirical studies suggest that people do ascribe blame to AI systems and robots in certain contexts [2]. This is disconcerting: Blame might be shifted from the owners, users or designers of AI systems to the systems themselves, leading to the diminished accountability of the responsible human agents [3]. In this paper, we explore one of the (...)
    5 citations
  9. Frankena on Environmental Ethics.Paul W. Taylor - 1981 - The Monist 64 (3):313-324.
    In his article “Ethics and the Environment” William K. Frankena distinguishes eight types of ethical theories which could generate moral rules and/or judgments concerning how rational agents should act with regard to the natural environment. The eight types are differentiated by their conceptions of moral subjects or patients. Each has its own view of the class of entities with respect to which moral agents can have duties and responsibilities. The eight types may be briefly (...)
    4 citations
  10. Can a Robot Lie? Exploring the Folk Concept of Lying as Applied to Artificial Agents.Markus Kneer - 2021 - Cognitive Science 45 (10):e13032.
    The potential capacity for robots to deceive has received considerable attention recently. Many papers explore the technical possibility for a robot to engage in deception for beneficial purposes (e.g., in education or health). In this short experimental paper, I focus on a more paradigmatic case: robot lying (lying being the textbook example of deception) for nonbeneficial purposes as judged from the human point of view. More precisely, I present an empirical experiment that investigates the following three questions: (a) (...)
    7 citations
  11. Robots As Intentional Agents: Using Neuroscientific Methods to Make Robots Appear More Social.Eva Wiese, Giorgio Metta & Agnieszka Wykowska - 2017 - Frontiers in Psychology 8:281017.
    Robots are increasingly envisaged as our future cohabitants. However, while considerable progress has been made in recent years in terms of their technological realization, the ability of robots to inter-act with humans in an intuitive and social way is still quite limited. An important challenge for social robotics is to determine how to design robots that can perceive the user’s needs, feelings, and intentions, and adapt to users over a broad range of cognitive abilities. It is conceivable (...)
    21 citations
  12. The role of social eye-gaze in children’s and adults’ ownership attributions to robotic agents in three cultures.Patricia Kanngiesser, Shoji Itakura, Yue Zhou, Takayuki Kanda, Hiroshi Ishiguro & Bruce Hood - 2015 - Interaction Studies: Social Behaviour and Communication in Biological and Artificial Systems 16 (1):1-28.
    Young children often treat robots as social agents after they have witnessed interactions that can be interpreted as social. We studied in three experiments whether four-year-olds from three cultures and adults from two cultures will attribute ownership of objects to a robot that engages in social gaze with a human. Participants watched videos of robot-human interactions, in which objects were possessed or new objects were created. Children and adults applied the same ownership rules to humans and (...)
    3 citations
  13. On the Moral Equality of Artificial Agents.Christopher Wareham - 2011 - International Journal of Technoethics 2 (1):35-42.
    Artificial agents such as robots are performing increasingly significant ethical roles in society. As a result, there is a growing literature regarding their moral status with many suggesting it is justified to regard manufactured entities as having intrinsic moral worth. However, the question of whether artificial agents could have the high degree of moral status that is attributed to human persons has largely been neglected. To address this question, the author developed a respect-based account (...)
    6 citations
  14. Application of artificial intelligence: risk perception and trust in the work context with different impact levels and task types.Uwe Klein, Jana Depping, Laura Wohlfahrt & Pantaleon Fassbender - 2024 - AI and Society 39 (5):2445-2456.
    Following the studies of Araujo et al. (AI Soc 35:611–623, 2020) and Lee (Big Data Soc 5:1–16, 2018), this empirical study uses two scenario-based online experiments. The sample consists of 221 subjects from Germany, differing in both age and gender. The original studies are not replicated one-to-one. New scenarios are constructed as realistically as possible and focused on everyday work situations. They are based on the AI acceptance model of Scheuer (Grundlagen intelligenter KI-Assistenten und deren vertrauensvolle Nutzung. Springer, Wiesbaden, 2020) (...)
    2 citations
  15. Affective Artificial Agents as sui generis Affective Artifacts.Marco Facchin & Giacomo Zanotti - 2024 - Topoi 43 (3).
    AI-based technologies are increasingly pervasive in a number of contexts. Our affective and emotional life makes no exception. In this article, we analyze one way in which AI-based technologies can affect them. In particular, our investigation will focus on affective artificial agents, namely AI-powered software or robotic agents designed to interact with us in affectively salient ways. We build upon the existing literature on affective artifacts with the aim of providing an original analysis of affective artificial (...)
    3 citations
  16. When is a robot a moral agent.John P. Sullins - 2006 - International Review of Information Ethics 6 (12):23-30.
    In this paper Sullins argues that in certain circumstances robots can be seen as real moral agents. A distinction is made between persons and moral agents such that it is not necessary for a robot to have personhood in order to be a moral agent. I detail three requirements for a robot to be seen as a moral agent. The first is achieved when the robot is significantly autonomous from any programmers or operators of the machine. The (...)
    75 citations
  17. Comparative legal study on privacy and personal data protection for robots equipped with artificial intelligence: looking at functional and technological aspects.Kaori Ishii - 2019 - AI and Society 34 (3):509-533.
    This paper undertakes a comparative legal study to analyze the challenges of privacy and personal data protection posed by Artificial Intelligence embedded in Robots, and to offer policy suggestions. After identifying the benefits from various AI usages and the risks posed by AI-related technologies, I then analyze legal frameworks and relevant discussions in the EU, USA, Canada, and Japan, and further consider the efforts of Privacy by Design originating in Ontario, Canada. While various AI usages provide great (...)
    4 citations
  18. This “Ethical Trap” Is for Roboticists, Not Robots: On the Issue of Artificial Agent Ethical Decision-Making.Keith W. Miller, Marty J. Wolf & Frances Grodzinsky - 2017 - Science and Engineering Ethics 23 (2):389-401.
    In this paper we address the question of when a researcher is justified in describing his or her artificial agent as demonstrating ethical decision-making. The paper is motivated by the amount of research being done that attempts to imbue artificial agents with expertise in ethical decision-making. It seems clear that computing systems make decisions, in that they make choices between different options; and there is scholarship in philosophy that addresses the distinction between ethical decision-making and general decision-making. (...)
    8 citations
  19. Instrumental Robots.Sebastian Köhler - 2020 - Science and Engineering Ethics 26 (6):3121-3141.
    Advances in artificial intelligence research allow us to build fairly sophisticated agents: robots and computer programs capable of acting and deciding on their own. These systems raise questions about who is responsible when something goes wrong—when such systems harm or kill humans. In a recent paper, Sven Nyholm has suggested that, because current AI will likely possess what we might call “supervised agency”, the theory of responsibility for individual agency is the wrong place to look for an (...)
    12 citations
  20. Robotic Animism: The Ethics of Attributing Minds and Personality to Robots with Artificial Intelligence.Sven Nyholm - 2022 - In Tiddy Smith (ed.), Animism and Philosophy of Religion. Springer Verlag. pp. 313-340.
    In this chapter, I use the expression “robotic animism” to refer to the tendency that many people have to interact with robots as if the robots have minds or a personality. I compare the idea of robotic animism with what philosophers and psychologists sometimes refer to as “mind-reading”, as it relates to human interaction with robots. The chapter offers various examples of robotic animism and mind-reading within different forms of human-robot interaction, and it also considers (...)
    1 citation
  21. Artificial agents - personhood in law and philosophy.Samir Chopra - manuscript
    Thinking about how the law might decide whether to extend legal personhood to artificial agents provides a valuable testbed for philosophical theories of mind. Further, philosophical and legal theorising about personhood for artificial agents can be mutually informing. We investigate two case studies, drawing on legal discussions of the status of artificial agents. The first looks at the doctrinal difficulties presented by the contracts entered into by artificial agents. We conclude that it (...)
    4 citations
  22. Privacy-centered design for social robots.Tanja Heuer, Ina Schiering & Reinhard Gerndt - 2019 - Interaction Studies: Social Behaviour and Communication in Biological and Artificial Systems 20 (3):509-529.
    Social robots as companions play an increasingly important role in our everyday life. However, reaching the full potential of social robots and the interaction between humans and robots requires permanent collection and processing of personal data of users, e.g. video and audio data for image and speech recognition. In order to foster user acceptance, trust and to address legal requirements as the General Data Protection Regulation of the EU, privacy needs to be integrated in the design process (...)
  23. Prolegomena to any future artificial moral agent.Colin Allen & Gary Varner - 2000 - Journal of Experimental and Theoretical Artificial Intelligence 12 (3):251--261.
    As artificial intelligence moves ever closer to the goal of producing fully autonomous agents, the question of how to design and implement an artificial moral agent (AMA) becomes increasingly pressing. Robots possessing autonomous capacities to do things that are useful to humans will also have the capacity to do things that are harmful to humans and other sentient beings. Theoretical challenges to developing artificial moral agents result both from controversies among ethicists about moral theory (...)
     
    82 citations
  24. Social robots as depictions of social agents.Herbert H. Clark & Kerstin Fischer - 2023 - Behavioral and Brain Sciences 46:e21.
    Social robots serve people as tutors, caretakers, receptionists, companions, and other social agents. People know that the robots are mechanical artifacts, yet they interact with them as if they were actual agents. How is this possible? The proposal here is that people construe social robots not as social agents per se, but as depictions of social agents. They interpret them much as they interpret ventriloquist dummies, hand puppets, virtual assistants, and other interactive depictions of people and animals. (...)
    1 citation
  25. Towards An Acronym for Organisational Ethics: Using a Quasi-person Model to Locate Responsible Agents in Collective Groups.David Ardagh - 2017 - Philosophy of Management 16 (2):137-160.
    Organisational Ethics could be more effectively taught if organisational agency could be better distinguished from activity in other group entities, and defended against criticisms. Some criticisms come from the side of what is called “methodological individualism”. These critics argue that, strictly speaking, only individuals really exist and act, and organisations are not individuals, real things, or agents. Other criticisms come from fear of the possible use of alleged “corporate personhood” to argue for a possible radical expansion of corporate rights (...)
    1 citation
  26. Biologically Inspired Emotional Expressions for Artificial Agents.Beáta Korcsok, Veronika Konok, György Persa, Tamás Faragó, Mihoko Niitsuma, Ádám Miklósi, Péter Korondi, Péter Baranyi & Márta Gácsi - 2018 - Frontiers in Psychology 9:388957.
    A special area of human-machine interaction, the expression of emotions gains importance with the continuous development of artificial agents such as social robots or interactive mobile applications. We developed a prototype version of an abstract emotion visualization agent to express five basic emotions and a neutral state. In contrast to well-known symbolic characters (e.g., smileys) these displays follow general biological and ethological rules. We conducted a multiple questionnaire study on the assessment of the displays with Hungarian (...)
  27. Trusting the (ro)botic other: By assumption?Paul B. de Laat - 2015 - SIGCAS Computers and Society 45 (3):255-260.
    How may human agents come to trust (sophisticated) artificial agents? At present, since the trust involved is non-normative, this would seem to be a slow process, depending on the outcomes of the transactions. Some more options may soon become available though. As debated in the literature, humans may meet (ro)bots as they are embedded in an institution. If they happen to trust the institution, they will also trust them to have tried out and tested the machines (...)
  28. From responsible robotics towards a human rights regime oriented to the challenges of robotics and artificial intelligence.Hin-Yan Liu & Karolina Zawieska - 2020 - Ethics and Information Technology 22 (4):321-333.
    As the aim of the responsible robotics initiative is to ensure that responsible practices are inculcated within each stage of design, development and use, this impetus is undergirded by the alignment of ethical and legal considerations towards socially beneficial ends. While every effort should be expended to ensure that issues of responsibility are addressed at each stage of technological progression, irresponsibility is inherent within the nature of robotics technologies from a theoretical perspective that threatens to thwart the endeavour. This is (...)
    5 citations
  29. Do Men Have No Need for “Feminist” Artificial Intelligence? Agentic and Gendered Voice Assistants in the Light of Basic Psychological Needs.Laura Moradbakhti, Simon Schreibelmayr & Martina Mara - 2022 - Frontiers in Psychology 13.
    Artificial Intelligence is supposed to perform tasks autonomously, make competent decisions, and interact socially with people. From a psychological perspective, AI can thus be expected to impact users’ three Basic Psychological Needs, namely autonomy, competence, and relatedness to others. While research highlights the fulfillment of these needs as central to human motivation and well-being, their role in the acceptance of AI applications has hitherto received little consideration. Addressing this research gap, our study examined the influence of BPN Satisfaction (...)
  30. Space Foes: Robot as A Revolutionary Subject.Alexander Pavlov - 2017 - Sociology of Power 29 (2):116-132.
    The concept of revolution remains relevant both for social-political debates and for academic studies. But at the same time many left thinkers as well as Neo-Marxist theorists have some problems with this concept, as it is no longer possible to reflect on revolution in terms of laws of history. For this reason, supporters of revolution consider revolution as a kind of utopian condition. Another difficulty is connected to the fact that today it is no longer possible to make a bet (...)
  31. The Epistemological Foundations of Artificial Agents.Nick J. Lacey & M. H. Lee - 2003 - Minds and Machines 13 (3):339-365.
    A situated agent is one which operates within an environment. In most cases, the environment in which the agent exists will be more complex than the agent itself. This means that an agent, human or artificial, which wishes to carry out non-trivial operations in its environment must use techniques which allow an unbounded world to be represented within a cognitively bounded agent. We present a brief description of some important theories within the fields of epistemology and metaphysics. We (...)
  32. Trust Toward Robots and Artificial Intelligence: An Experimental Approach to Human–Technology Interactions Online.Atte Oksanen, Nina Savela, Rita Latikka & Aki Koivula - 2020 - Frontiers in Psychology 11.
    Robotization and artificial intelligence are expected to change societies profoundly. Trust is an important factor of human–technology interactions, as robots and AI increasingly contribute to tasks previously handled by humans. Currently, there is a need for studies investigating trust toward AI and robots, especially in first-encounter meetings. This article reports findings from a study investigating trust toward robots and AI in an online trust game experiment. The trust game manipulated the hypothetical opponents that were described (...)
  33. Robots: ethical by design.Gordana Dodig Crnkovic & Baran Çürüklü - 2012 - Ethics and Information Technology 14 (1):61-71.
    Among ethicists and engineers within robotics there is an ongoing discussion as to whether ethical robots are possible or even desirable. We answer both of these questions in the positive, based on an extensive literature study of existing arguments. Our contribution consists in bringing together and reinterpreting pieces of information from a variety of sources. One of the conclusions drawn is that artifactual morality must come in degrees and depend on the level of agency, autonomy and intelligence of the (...)
    16 citations
  34. Artificial agents among us: Should we recognize them as agents proper?Migle Laukyte - 2017 - Ethics and Information Technology 19 (1):1-17.
    In this paper, I discuss whether in a society where the use of artificial agents is pervasive, these agents should be recognized as having rights like those we accord to group agents. This kind of recognition I understand to be at once social and legal, and I argue that in order for an artificial agent to be so recognized, it will need to meet the same basic conditions in light of which group agents are (...)
    15 citations
  35. The Effects of Artificial Intelligence and Modern Technology on Commercial Transactions for Commercial Transactions Law 2023.Adel Salem AlLouzi, Karima Krim & Mohammad Abdalhafid AlKhamaiseh - forthcoming - Evolutionary Studies in Imaginative Culture:635-652.
    In light of the Fourth Industrial Revolution, the intervention of artificial intelligence in commercial transactions has expanded, and it has not remained a mere subject or subject of the contract, whether it is a material or moral product, but has gone beyond that to have a fundamental and effective role in concluding the contract as an electronic agent that makes the contract automated and concluded in whole or in part in an automated manner without human intervention. The UAE (...)
  36. Can Merging a Capability Approach with Effectual Processes Help Us Define a Permissible Action Range for AI Robotics Entrepreneurship?Yuko Kamishima, Bart Gremmen & Hikari Akizawa - 2018 - Philosophy of Management 17 (1):97-113.
    In this paper, we first enumerate the problems that humans might face with a new type of technology such as robots with artificial intelligence (AI robots). Robotics entrepreneurs are calling for discussions about goals and values because AI robots, which are potentially more intelligent than humans, can no longer be fully understood and controlled by humans. AI robots could even develop into ethically “bad” agents and become very harmful. We consider these discussions as part (...)
    9 citations
  37. Artificial Evil and the Foundation of Computer Ethics.Luciano Floridi & J. W. Sanders - 2001 - Springer Netherlands. Edited by Luciano Floridi & J. W. Sanders.
    Moral reasoning traditionally distinguishes two types of evil: moral (ME) and natural (NE). The standard view is that ME is the product of human agency and so includes phenomena such as war, torture and psychological cruelty; that NE is the product of nonhuman agency, and so includes natural disasters such as earthquakes, floods, disease and famine; and finally, that more complex cases are appropriately analysed as a combination of ME and NE. Recently, as a result of developments in autonomous (...) in cyberspace, a new class of interesting and important examples of hybrid evil has come to light. In this paper, it is called artificial evil (AE) and a case is made for considering it to complement ME and NE to produce a more adequate taxonomy. By isolating the features that have led to the appearance of AE, cyberspace is characterised as a self-contained environment that forms the essential component in any foundation of the emerging field of Computer Ethics (CE). It is argued that this goes some way towards providing a methodological explanation of why cyberspace is central to so many of CE's concerns; and it is shown how notions of good and evil can be formulated in cyberspace. Of considerable interest is how the propensity for an agent's action to be morally good or evil can be determined even in the absence of biologically sentient participants and thus allows artificial agents not only to perpetrate evil (and for that matter good) but conversely to ‘receive’ or ‘suffer from’ it. The thesis defended is that the notion of entropy structure, which encapsulates human value judgement concerning cyberspace in a formal mathematical definition, is sufficient to achieve this purpose and, moreover, that the concept of AE can be determined formally, by mathematical methods. A consequence of this approach is that the debate on whether CE should be considered unique, and hence developed as a Macroethics, may be viewed, constructively, in an alternative manner.
The case is made that whilst CE issues are not uncontroversially unique, they are sufficiently novel to render inadequate the approach of standard Macroethics such as Utilitarianism and Deontologism and hence to prompt the search for a robust ethical theory that can deal with them successfully. The name Information Ethics (IE) is proposed for that theory. It is argued that the uniqueness of IE is justified by its being non-biologically biased and patient-oriented: IE is an Environmental Macroethics based on the concept of data entity rather than life. It follows that the novelty of CE issues such as AE can be appreciated properly because IE provides a new perspective (though not vice versa). In light of the discussion provided in this paper, it is concluded that Computer Ethics is worthy of independent study because it requires its own application-specific knowledge and is capable of supporting a methodological foundation, Information Ethics.
    30 citations
  38. Adopting the intentional stance toward natural and artificial agents.Jairo Perez-Osorio & Agnieszka Wykowska - 2020 - Philosophical Psychology 33 (3):369-395.
    In our daily lives, we need to predict and understand others’ behavior in order to navigate through our social environment. Predictions concerning other humans’ behavior usually refer to their mental states, such as beliefs or intentions. Such a predictive strategy is called ‘adoption of the intentional stance.’ In this paper, we review literature related to the concept of intentional stance from the perspectives of philosophy, psychology, human development, culture, and human-robot interaction. We propose that adopting the intentional stance (...)
    9 citations
  39. (1 other version)Artificial evil and the foundation of computer ethics.L. Floridi & J. Sanders - 2000 - Etica E Politica 2 (2).
    Moral reasoning traditionally distinguishes two types of evil: moral and natural. The standard view is that ME is the product of human agency and so includes phenomena such as war, torture and psychological cruelty; that NE is the product of nonhuman agency, and so includes natural disasters such as earthquakes, floods, disease and famine; and finally, that more complex cases are appropriately analysed as a combination of ME and NE. Recently, as a result of developments in autonomous (...) in cyberspace, a new class of interesting and important examples of hybrid evil has come to light. In this paper, it is called artificial evil and a case is made for considering it to complement ME and NE to produce a more adequate taxonomy. By isolating the features that have led to the appearance of AE, cyberspace is characterised as a self-contained environment that forms the essential component in any foundation of the emerging field of Computer Ethics. It is argued that this goes some way towards providing a methodological explanation of why cyberspace is central to so many of CE’s concerns; and it is shown how notions of good and evil can be formulated in cyberspace. Of considerable interest is how the propensity for an agent’s action to be morally good or evil can be determined even in the absence of biologically sentient participants and thus allows artificial agents not only to perpetrate evil but conversely to ‘receive’ or ‘suffer from’ it. The thesis defended is that the notion of entropy structure, which encapsulates human value judgement concerning cyberspace in a formal mathematical definition, is sufficient to achieve this purpose and, moreover, that the concept of AE can be determined formally, by mathematical methods. A consequence of this approach is that the debate on whether CE should be considered unique, and hence developed as a Macroethics, may be viewed, constructively, in an alternative manner. 
The case is made that whilst CE issues are not uncontroversially unique, they are sufficiently novel to render inadequate the approach of standard Macroethics such as Utilitarianism and Deontologism and hence to prompt the search for a robust ethical theory that can deal with them successfully. The name Information Ethics is proposed for that theory. It is argued that the uniqueness of IE is justified by its being non-biologically biased and patient-oriented: IE is an Environmental Macroethics based on the concept of data entity rather than life. It follows that the novelty of CE issues such as AE can be appreciated properly because IE provides a new perspective. In light of the discussion provided in this paper, it is concluded that Computer Ethics is worthy of independent study because it requires its own application-specific knowledge and is capable of supporting a methodological foundation, Information Ethics.
    26 citations
  40. Robot morali? Considerazioni filosofiche sulla machine ethics [Moral Robots? Philosophical Considerations on Machine Ethics].Fabio Fossa - 2020 - Sistemi Intelligenti 2020 (2):425-444.
    The purpose of this essay is to determine the domain of validity of the notions developed in Machine Ethics [ME]. To this aim, I analyse the epistemological and methodological presuppositions that lie at the root of such a technological project. On this basis, I then try to develop the theoretical means to identify and deconstruct improper applications of these notions to objects that do not belong to the same epistemic context, focusing in particular on the extent to which ME is supposed (...)
  41. A Pragmatic Approach to the Intentional Stance: Semantic, Empirical and Ethical Considerations for the Design of Artificial Agents.Guglielmo Papagni & Sabine Koeszegi - 2021 - Minds and Machines 31 (4):505-534.
    Artificial agents are progressively becoming more present in everyday-life situations and more sophisticated in their interaction affordances. In some specific cases, like Google Duplex, GPT-3 bots or DeepMind’s AlphaGo Zero, their capabilities reach or exceed human levels. The use contexts of everyday life necessitate making such agents understandable by laypeople. At the same time, displaying human levels of social behavior has kindled the debate over the adoption of Dennett’s ‘intentional stance’. By means of a (...)
    7 citations
  42. Moral sensitivity and the limits of artificial moral agents.Joris Graff - 2024 - Ethics and Information Technology 26 (1):1-12.
    Machine ethics is the field that strives to develop ‘artificial moral agents’ (AMAs), artificial systems that can autonomously make moral decisions. Some authors have questioned the feasibility of machine ethics, by questioning whether artificial systems can possess moral competence, or the capacity to reach morally right decisions in various situations. This paper explores this question by drawing on the work of several moral philosophers (McDowell, Wiggins, Hampshire, and Nussbaum) who have characterised moral competence in a manner (...)
  43. Artificial agents and the expanding ethical circle.Steve Torrance - 2013 - AI and Society 28 (4):399-414.
    I discuss the realizability and the ethical ramifications of Machine Ethics, from a number of different perspectives: I label these the anthropocentric, infocentric, biocentric and ecocentric perspectives. Each of these approaches takes a characteristic view of the position of humanity relative to other aspects of the designed and the natural worlds—or relative to the possibilities of ‘extra-human’ extensions to the ethical community. In the course of the discussion, a number of key issues emerge concerning the relation between technology and (...)
    8 citations
  44. Exploring the Ethics of Interaction with Care Robots.María Victoria Martínez-López, Gonzalo Díaz-Cobacho, Aníbal M. Astobiza & Blanca Rodríguez López - 2023 - In Francisco Lara & Jan Deckers (eds.), Ethics of Artificial Intelligence. Springer Nature Switzerland. pp. 149-167.
    The development of assistive robotics and anthropomorphic AI allows machines to increasingly enter into the daily lives of human beings and gradually become part of their lives. Robots have made a strong entry in the field of assistive behaviour. In this chapter, we will ask to what extent technology can satisfy people’s personal needs and desires as compared to human agents in the field of care. The industry of assistive technology burst out of the gate at (...)
  45. Integration of a social robot and gamification in adult learning and effects on motivation, engagement and performance.Anna Riedmann, Philipp Schaper & Birgit Lugrin - forthcoming - AI and Society:1-20.
    Learning is a central component of human life and essential for personal development. Therefore, utilizing new technologies in the learning context and exploring their combined potential are considered essential to support self-directed learning in a digital age. A learning environment can be expanded by various technical and content-related aspects. Gamification in the form of elements from video games offers a potential concept to support the learning process. This can be supplemented by technology-supported learning. While the use of tablets is (...)
  46. Towards socially-competent and culturally-adaptive artificial agents.Chiara Bassetti, Enrico Blanzieri, Stefano Borgo & Sofia Marangon - 2022 - Interaction Studies 23 (3):469-512.
    The development of artificial agents for social interaction pushes to enrich robots with social skills and knowledge about (local) social norms. One possibility is to distinguish the expressive and the functional orders during a human-robot interaction. The overarching aim of this work is to set a framework to make the artificial agent socially-competent beyond dyadic interaction – interaction in varying multi-party social situations – and beyond individual-based user personalization, thereby enlarging the current conception of “culturally-adaptive”. (...)
    1 citation
  47. Trust Me on This One: Conforming to Conversational Assistants.Donna Schreuter, Peter van der Putten & Maarten H. Lamers - 2021 - Minds and Machines 31 (4):535-562.
    Conversational artificial agents and artificially intelligent voice assistants are becoming increasingly popular. Digital virtual assistants such as Siri, or conversational devices such as Amazon Echo or Google Home, are permeating everyday life, and are designed to be more and more humanlike in their speech. This study investigates the effect this can have on one’s conformity with an AI assistant. In the 1950s, Solomon Asch already demonstrated the power and danger of conformity amongst people. In these classical experiments, test (...)
    1 citation
  48. Artificial Moral Agents Within an Ethos of AI4SG.Bongani Andy Mabaso - 2020 - Philosophy and Technology 34 (1):7-21.
    As artificial intelligence (AI) continues to proliferate into every area of modern life, there is no doubt that society has to think deeply about the potential impact, whether negative or positive, that it will have. Whilst scholars recognise that AI can usher in a new era of personal, social and economic prosperity, they also warn of the potential for it to be misused towards the detriment of society. Deliberate strategies are therefore required to ensure that AI can be safely (...)
    7 citations
  49. Robots as Malevolent Moral Agents: Harmful Behavior Results in Dehumanization, Not Anthropomorphism.Aleksandra Swiderska & Dennis Küster - 2020 - Cognitive Science 44 (7):e12872.
    A robot's decision to harm a person is sometimes considered to be the ultimate proof of it gaining a human‐like mind. Here, we contrasted predictions about attribution of mental capacities from moral typecasting theory, with the denial of agency from dehumanization literature. Experiments 1 and 2 investigated mind perception for intentionally and accidentally harmful robotic agents based on text and image vignettes. Experiment 3 disambiguated agent intention (malevolent and benevolent), and additionally varied the type of agent (robotic and (...)
    1 citation
  50. Two Types of Refutation in Philosophical Argumentation.Catarina Dutilh Novaes - 2022 - Argumentation 36 (4):493-510.
    In this paper, I highlight the significance of practices of refutation in philosophical inquiry, that is, practices of showing that a claim, person or theory is wrong. I present and contrast two prominent approaches to philosophical refutation: refutation in ancient Greek dialectic (elenchus), in its Socratic variant as described in Plato’s dialogues, and as described in Aristotle’s logical texts; and the practice of providing counterexamples to putative definitions familiar from twentieth century analytic philosophy, focusing on the so-called Gettier problem. Moreover, (...)
1 — 50 / 968