(De)constructing ethics for autonomous cars: A case study of Ethics Pen-Testing towards “AI for the Common Good”
DOI: https://doi.org/10.29173/irie381

Keywords: Artificial Intelligence, Autonomous Cars, Common Good, Ethics, Pen-Testing

Abstract
Recently, many AI researchers and practitioners have embarked on research visions that involve doing AI for “Good”. This is part of a general drive towards infusing AI research and practice with ethical thinking. One frequent theme in current ethical guidelines is the requirement that AI be good for all, or: contribute to the Common Good. But what is the Common Good, and is it enough to want to be good? Via four lead questions, the concept of Ethics Pen-Testing (EPT) identifies challenges and pitfalls when determining, from an AI point of view, what the Common Good is and how it can be enhanced by AI.
The current paper reports on a first evaluation of EPT. EPT is applicable to various artefacts that have ethical impact, including designs for or implementations of specific AI technology, and requirements engineering methods for eliciting which ethical settings to build into AI. The current study focused on the latter type of artefact. In four independent sessions, participants with close but varying involvements in “AI and ethics” were asked to deconstruct a method that has been proposed for eliciting ethical values and choices in autonomous car technology, an online experiment modelled on the Trolley Problem.
The results suggest that EPT is well-suited to this task: the remarks made by participants lent themselves well to being structured by the four lead questions of EPT, in particular the questions of what the problem is and which stakeholders define it. As part of the problem definition, the need for thorough technical domain knowledge in discussions of AI and ethics became apparent. Thus, participants questioned the framing and the presuppositions inherent in the experiment and in the discourse on autonomous cars that underlies it. They transitioned from discussing a specific AI artefact to discussing its role in wider socio-technical systems.
Results also illustrate to what extent and how the requirements engineering method forces us not only to have a discussion about which values to “build into” AI systems (the substantive building blocks of the Common Good), but also about how we want to have this discussion at all. Thus, it forces us to become explicit about how we conceive of democracy and the constitutional state (the procedural building blocks of the Common Good).
License
Under the CC-BY 4.0 license, you have the right to:
Share — copy and redistribute the material in any medium or format
Adapt — remix, transform, and build upon the material for any purpose, even commercially.
Under the following terms:
Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.