Some Policy Recommendations to Fight Gender and Racial Biases in AI

Authors

  • Galit Wellner, Tel Aviv University, Faculty of Humanities

DOI:

https://doi.org/10.29173/irie497

Keywords:

Algorithms, Artificial Intelligence, Gender and Racial Bias, Machine Learning, Transparency

Abstract

Many solutions have been proposed to fight the problem of bias in AI. This paper arranges them into five categories: (a) "no gender or race" - omitting any reference to gender and race from the dataset; (b) transparency - revealing the considerations that led the algorithm to a certain conclusion; (c) designing algorithms that are not biased; (d) "machine education," which complements "machine learning" by adding value sensitivity to the algorithm; and (e) involving humans in the process ("human-in-the-loop"). The paper then offers policy recommendations to promote two of these solutions: transparency (b) and human-in-the-loop (e). For transparency, policy can draw on the measures implemented in the pharmaceutical industry for drug approval. To promote human-in-the-loop, the paper proposes an "ombudsman" mechanism that ensures the biases detected by users are addressed by the companies that develop and run the algorithms.
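As a purely illustrative aside, not drawn from the paper itself, the short sketch below shows how two of the abstract's categories might look in code: category (a), omitting a protected attribute from the dataset, and category (b), transparency, here approximated by reporting each feature's contribution to a linear model's decision. The loan-approval scenario, column names, and data are invented for the example, and scikit-learn's LogisticRegression stands in for whatever model a real system would use.

    # Minimal, hypothetical sketch of categories (a) and (b) from the abstract.
    # All data and column names are invented for illustration.
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n = 500
    df = pd.DataFrame({
        "income": rng.normal(50_000, 15_000, n),
        "years_employed": rng.integers(0, 30, n),
        "gender": rng.integers(0, 2, n),  # protected attribute (0/1 encoding)
    })
    # Outcome driven by income only, for illustration.
    df["approved"] = (df["income"] + rng.normal(0, 10_000, n) > 55_000).astype(int)

    # Category (a): drop the protected attribute before training.
    features = [c for c in ["income", "years_employed", "gender"] if c != "gender"]

    X = StandardScaler().fit_transform(df[features])
    model = LogisticRegression().fit(X, df["approved"])

    # Category (b): a crude "transparency report" for one applicant, listing
    # each feature's contribution to the log-odds of approval.
    applicant = X[0]
    for name, coef, value in zip(features, model.coef_[0], applicant):
        print(f"{name:>15}: {coef * value:+.3f}")
    print(f"{'intercept':>15}: {model.intercept_[0]:+.3f}")

The per-feature breakdown is only one possible reading of "transparency"; the paper's point is about what such explanations should be required to disclose, not about a particular technique.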

References

Caliskan, Aylin, Joanna J. Bryson, and Arvind Narayanan. 2017. “Semantics Derived Automatically from Language Corpora Contain Human-like Biases.” Science 356 (6334): 183–86. https://doi.org/10.1126/science.aal4230.

Clair, Adam. 2017. “Rule by Nobody: Algorithms Update Bureaucracy’s Long-Standing Strategy for Evasion.” Real Life, 2017. https://reallifemag.com/rule-by-nobody/.

D’Ignazio, Catherine, and Lauren F. Klein. 2020. Data Feminism. Cambridge, MA: The MIT Press. https://doi.org/10.7551/mitpress/11805.001.0001.

Dave, Paresh. 2018. “Fearful of Bias, Google Blocks Gender-Based Pronouns from New AI Tool.” Reuters, 2018. https://www.reuters.com/article/us-alphabet-google-ai-gender/fearful-of-bias-google-blocks-gender-based-pronouns-from-new-ai-tool-idUSKCN1NW0EF.

Fisman, Ray, and Michael Luca. 2016. “Fixing Discrimination in Online Marketplaces.” Harvard Business Review, December 2016, 89–96.

Lomas, Natasha. 2018. “IBM Launches Cloud Tool to Detect AI Bias and Explain Automated Decisions.” TechCrunch, 2018. https://techcrunch.com/2018/09/19/ibm-launches-cloud-tool-to-detect-ai-bias-and-explain-automated-decisions/.

Marcus, Gary. 2018. “The Deepest Problem with Deep Learning.” Medium. 2018. https://medium.com/@GaryMarcus/the-deepest-problem-with-deep-learning-91c5991f5695.

Nallur, Vivek. 2020. “Landscape of Machine Implemented Ethics.” Science and Engineering Ethics 26 (5): 2381–99. https://doi.org/10.1007/s11948-020-00236-y.

O’Neil, Cathy. 2016. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown Publishing Group.

Wellner, Galit. 2020. “When AI Is Gender-Biased: The Effects of Biased AI on the Everyday Experiences of Women.” Humana.Mente 13 (37): 127–50.

———. 2021. “I-Algorithm-Dataset: Mapping the Solutions to Gender Bias in AI.” In Gendered Configurations of Humans and Machines: Interdisciplinary Contributions, edited by Jan Büssers, Anja Faulhaber, Myriam Raboldt, and Rebecca Wiesner, 79–97. Verlag Barbara Budrich.

Wellner, Galit, and Tiran Rothman. 2020. “Feminist AI: Can We Expect Our AI Systems to Become Feminist?” Philosophy and Technology 33 (2): 191–205. https://doi.org/10.1007/s13347-019-00352-z.

Zerilli, John, Alistair Knott, James Maclaurin, and Colin Gavaghan. 2019. “Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard?” Philosophy and Technology 32 (4): 661–83. https://doi.org/10.1007/s13347-018-0330-6.

Zou, James, and Londa Schiebinger. 2018. “Design AI So That It’s Fair.” Nature 559 (7714): 324–26. https://doi.org/10.1038/d41586-018-05707-8.

Published

2022-11-30

How to Cite

Wellner, Galit. 2022. “Some Policy Recommendations to Fight Gender and Racial Biases in AI”. The International Review of Information Ethics 32 (1). Edmonton, Canada. https://doi.org/10.29173/irie497.