Why Does AI Companionship Go Wrong?

Author

Ziwei Gao

DOI:

https://doi.org/10.29173/irie526

Keywords:

Artificial Intelligence, language models, AI companions

Abstract

AI companions, powered by advanced language models, offer personalised interactions and emotional support, but their increasing prevalence raises significant ethical concerns. Through a case study, this paper examines the complex interplay of factors contributing to the potential negative impacts of AI companions. The author argues that these negative impacts stem from insufficient user screening, which may expose vulnerable individuals to unsuitable AI interactions; regulatory frameworks that struggle to keep pace with rapid technological advancement; and the lack of a clear distinction between inherent AI limitations and temporary developmental artifacts. The paper aims to provide insights for responsible AI development and calls for robust user screening protocols, adaptive regulatory frameworks, and more informed research mindsets.



Published

2024-10-19

How to Cite

Gao, Ziwei. 2024. "Why Does AI Companionship Go Wrong?" The International Review of Information Ethics 34 (1). Edmonton, Canada. https://doi.org/10.29173/irie526.