Development Practices of Trusted AI Systems among Canadian Data Scientists
DOI: https://doi.org/10.29173/irie377
Keywords: Artificial Intelligence, Data Science, Explainability, Fairness, Machine Learning, Trust
Abstract
The introduction of Artificial Intelligence (AI) systems has demonstrated considerable potential and benefits for enhancing decision-making processes in our society. However, despite the successful performance of AI systems to date, skepticism and concern remain regarding whether AI systems can form a trusting relationship with human users. Developing trusted AI systems requires careful consideration and evaluation of their reproducibility, interpretability, and fairness, which, in turn, places increased expectations and responsibilities on data scientists. Therefore, the current study focused on understanding Canadian data scientists' self-confidence in creating trusted AI systems, drawing on their current AI system development practices.
License
Under the CC-BY 4.0 license, you have the right to:
Share — copy and redistribute the material in any medium or format
Adapt — remix, transform, and build upon the material for any purpose, even commercially.
Under the following terms:
Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.