Development Practices of Trusted AI Systems among Canadian Data Scientists

Authors

  • Jinnie Shin, Centre for Research in Applied Measurement and Evaluation, University of Alberta
  • Okan Bulut, Centre for Research in Applied Measurement and Evaluation, University of Alberta
  • Mark J. Gierl, Centre for Research in Applied Measurement and Evaluation, University of Alberta

DOI:

https://doi.org/10.29173/irie377

Keywords:

Artificial Intelligence, Data Science, Explainability, Fairness, Machine Learning, Trust

Abstract

The introduction of Artificial Intelligence (AI) systems has demonstrated remarkable potential and benefits for enhancing decision-making processes in our society. However, despite the successful performance of AI systems to date, skepticism and concern remain regarding whether AI systems can form a trusting relationship with human users. Developing trusted AI systems requires careful consideration and evaluation of their reproducibility, interpretability, and fairness, which, in turn, places increased expectations and responsibilities on data scientists. Therefore, the current study focused on understanding Canadian data scientists’ self-confidence in creating trusted AI systems, drawing on their current AI system development practices.

Published

2020-06-30

How to Cite

Shin, Jinnie, Okan Bulut, and Mark J. Gierl. 2020. “Development Practices of Trusted AI Systems Among Canadian Data Scientists.” The International Review of Information Ethics 28 (June). Edmonton, Canada. https://doi.org/10.29173/irie377.