Conversational Agents and Personal Privacy Harms Case Study

Authors

Norman Mooradian

DOI:

https://doi.org/10.29173/irie536

Keywords:

Artificial intelligence, privacy, conversational agent

Abstract

This fictional case study examines whether a personal conversational agent/advisor called iSoph, which encodes extensive personal information and has a fluent natural language interface, may raise the privacy harms normally thought to attend personal relationships, as opposed to the harms associated with institutional databases and big data analytics. The case stipulates that (i) iSoph collects extensive personal data about its users from multiple, multimodal sources, (ii) can make inferences from this data in combination with its models, but (iii) cannot share information with its developer or any third party. It is shown that despite condition (iii), iSoph raises privacy risks of the type associated with personal relationships and direct observation. iSoph raises these risks because, as an advanced conversational agent, it can evoke anthropomorphizing responses from its users in ways they are not fully conscious of or able to control.

Published

2024-10-20

How to Cite

Mooradian, Norman. 2024. “Conversational Agents and Personal Privacy Harms Case Study”. The International Review of Information Ethics 34 (1). Edmonton, Canada. https://doi.org/10.29173/irie536.