When Is a Robot a Moral Agent?
DOI:
https://doi.org/10.29173/irie136
Abstract
In this paper I argue that in certain circumstances robots can be seen as real moral agents. A distinction is made between persons and moral agents such that it is not necessary for a robot to have personhood in order to be a moral agent. I detail three requirements for a robot to be seen as a moral agent. The first is met when the robot is significantly autonomous from any programmers or operators of the machine. The second is met when the robot’s behavior can be analyzed or explained only by ascribing to it some predisposition or ‘intention’ to do good or harm. Finally, robot moral agency requires the robot to behave in a way that shows an understanding of responsibility to some other moral agent. Robots that meet all of these criteria will have moral rights as well as responsibilities, regardless of their status as persons.
Published
2006-12-01
How to Cite
Sullins III, John P. 2006. “When Is a Robot a Moral Agent?”. The International Review of Information Ethics 6 (December). Edmonton, Canada: 23-30. https://doi.org/10.29173/irie136.
Issue
The International Review of Information Ethics 6 (December 2006)
Section
Article
License
Under the CC-BY 4.0 license, you have the right to:
Share — copy and redistribute the material in any medium or format.
Adapt — remix, transform, and build upon the material for any purpose, even commercially.
Under the following terms:
Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.