Should moral decisions be different for human and artificial cognitive agents?
Abstract
Moral judgments are elicited using dilemmas that present hypothetical situations in which an agent must choose between letting several people die or sacrificing one person in order to save them. Evaluations of the action or inaction of a human agent are compared to those of two artificial agents: a humanoid robot and an automated system. Ratings of the rightness, blameworthiness, and moral permissibility of action or inaction in incidental and instrumental moral dilemmas are used. The results show that for the artificial cognitive agents the utilitarian action is rated as more morally permissible than inaction. The humanoid robot is found to be less blameworthy for its choices than either the human agent or the automated system. Action is found to be more appropriate, more right, more morally permissible, and less blameworthy than inaction only in the incidental scenarios. The results are interpreted and discussed from the perspective of perceived moral agency.
Similar articles
Cognitive Process of Moral Decision-Making for Autonomous Agents
There is a great variety of theoretical models of cognition whose main purpose is to explain the inner workings of the human brain. Researchers from areas such as neuroscience, psychology, and physiology have proposed these models. Nevertheless, most of them are based on empirical studies and on experiments with humans, primates, and rodents. In fields such as cognitive informatics and...
Should Robots Kill? Moral Judgments for Actions of Artificial Cognitive Agents
Moral dilemmas are used to study situations in which two moral rules conflict: e.g., is it permissible to kill one person in order to save more people? In standard moral dilemmas the protagonist is a human. However, recent progress in robotics raises the question of how artificial cognitive agents should act in situations involving moral dilemmas. Here, we study mora...
A Conceptual and Computational Model of Moral Decision Making in Human and Artificial Agents
Recently, there has been a resurgence of interest in general, comprehensive models of human cognition. Such models aim to explain higher-order cognitive faculties, such as deliberation and planning. Given a computational representation, the validity of these models can be tested in computer simulations such as software agents or embodied robots. The push to implement computational models of thi...
Constrained Incrementalist Moral Decision Making for a Biologically Inspired Cognitive Architecture
The field of machine ethics has emerged in response to the development of autonomous artificial agents with the ability to interact with human beings, or to produce changes in the environment which can affect humans (Allen, Varner, & Zinser, 2000). Such agents, whether physical (robots) or virtual (software agents) need a mechanism for moral decision making in order to ensure that their actions...
Cognitive Criteria for the Moral Solution of the False Brothers "Independence" and "Despotism" of Judges Based on Religious Sources
The independence of judges in arbitration is an important principle, set out in Article 156 of the Constitution. The main issue of this research is the moral resolution of the false brothers of independence and despotism in judgment. Judges sometimes confuse the two when making decisions and passing sentences, which can only be resolved by recognizing t...