AI-enhanced coding in bioinformatics: legal and ethical considerations
Keywords:
ChatGPT, artificial intelligence, generative artificial intelligence, ethics, large language models, health data, life-science research, bioinformatics
Abstract
In this article, we describe a case study exploring the ethical and legal issues raised by the introduction of Artificial Intelligence (AI)-enhanced coding in bioinformatics and related life-science research. Using this case study, we highlight the potential dangers posed by AI-assisted coding in the programming and analysis of health data. The aim is to understand and weigh the potential harms, and to guide students and young researchers in using AI responsibly in their work. Recent developments in generative artificial intelligence (Gen AI) and the emergence of chatbots based on Large Language Models (LLMs), such as the Chat Generative Pre-Trained Transformer (ChatGPT) launched by OpenAI on November 30, 2022, are currently the subject of much debate, mainly concerning AI-generated scientific research and publications. Several scientists, editors, and publishers have objected to ChatGPT making its way into scientific production by being listed as a co-author. Programming is another domain in which LLM-based chatbots have shown immense potential: these AI systems can assist human programmers at different stages, including writing and debugging code. The rapid development of AI and related emerging technologies, and their wide deployment in different domains including life-science research, give rise to multiple ethical, legal, and technical considerations. We designed the present case study to describe a plausible situation in biomedical research and to elucidate some of the legal and ethical issues arising from the introduction of AI into life-science research.
Copyright (c) 2024 Seraya Maouche
This work is licensed under a Creative Commons Attribution 4.0 International License.