When Is a Robot a Moral Agent?

Authors

  • John P. Sullins III

DOI:

https://doi.org/10.29173/irie136

Abstract

In this paper I argue that in certain circumstances robots can be seen as real moral agents. A distinction is made between persons and moral agents such that it is not necessary for a robot to have personhood in order to be a moral agent. I detail three requirements for a robot to be seen as a moral agent. The first is achieved when the robot is significantly autonomous from any programmers or operators of the machine. The second is met when one can analyze or explain the robot’s behavior only by ascribing to it some predisposition or ‘intention’ to do good or harm. Finally, robot moral agency requires the robot to behave in a way that shows an understanding of responsibility to some other moral agent. Robots that meet all of these criteria will have moral rights as well as responsibilities, regardless of their status as persons.

Published

2006-12-01

How to Cite

Sullins III, John P. 2006. “When Is a Robot a Moral Agent?”. The International Review of Information Ethics 6 (December). Edmonton, Canada: 23-30. https://doi.org/10.29173/irie136.