Development Practices of Trusted AI Systems among Canadian Data Scientists

Authors

  • Jinnie Shin, Centre for Research in Applied Measurement and Evaluation, University of Alberta
  • Okan Bulut, Centre for Research in Applied Measurement and Evaluation, University of Alberta
  • Mark J. Gierl, Centre for Research in Applied Measurement and Evaluation, University of Alberta

DOI:

https://doi.org/10.29173/irie377

Keywords:

Artificial Intelligence, Data Science, Explainability, Fairness, Machine Learning, Trust

Abstract

The introduction of Artificial Intelligence (AI) systems has demonstrated considerable potential to enhance decision-making processes in our society. However, despite the successful performance of AI systems to date, skepticism and concern remain regarding whether AI systems can form a trusting relationship with human users. Developing trusted AI systems requires careful consideration and evaluation of their reproducibility, interpretability, and fairness, which, in turn, places increased expectations and responsibilities on data scientists. Therefore, the current study focused on understanding Canadian data scientists’ self-confidence in creating trusted AI systems, in light of their current AI system development practices.

Published

2020-06-30

How to Cite

Shin, Jinnie, Okan Bulut, and Mark J. Gierl. 2020. “Development Practices of Trusted AI Systems Among Canadian Data Scientists”. The International Review of Information Ethics 28 (June). Edmonton, Canada. https://doi.org/10.29173/irie377.