Conversational Agents and Personal Privacy Harms Case Study
DOI: https://doi.org/10.29173/irie536
Keywords: Artificial intelligence, privacy, conversational agent
Abstract
This fictional case study examines whether a personal conversational agent/advisor called iSoph, which encodes extensive personal information and has a fluent natural language interface, may raise privacy harms of the kind normally thought to attend personal relationships, as opposed to harms associated with institutional databases and big data analytics. The case stipulates that (i) iSoph collects extensive personal data about its users from multiple, multimodal sources, (ii) can make inferences from this data in combination with its models, but (iii) cannot share information with its developer or any third party. It is shown that despite condition (iii), iSoph raises privacy risks. These risks are of the type associated with personal relationships and direct observation. iSoph raises them because, as an advanced conversational agent, it is able to evoke anthropomorphizing responses from its users in ways that they are not fully conscious of or able to control.
License
Copyright (c) 2024 Norman Mooradian
This work is licensed under a Creative Commons Attribution 4.0 International License.
Under the CC-BY 4.0 license, you have the right to:
Share — copy and redistribute the material in any medium or format
Adapt — remix, transform, and build upon the material for any purpose, even commercially.
Under the following terms:
Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.