This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Socio-technical misalignments: a case study of why AI systems may fail (Preprint)
Citations: 0
Authors: 4
Year: 2025
Abstract
<sec> <title>BACKGROUND</title> The use of artificial intelligence (AI) in healthcare is growing, with computerised decision support (CDS) tools being one of its use cases. However, there remains limited understanding of why AI systems can fail to be successfully implemented in healthcare settings, particularly when their socio-technical context is considered. </sec> <sec> <title>OBJECTIVE</title> This study tracks the implementation of an AI chatbot, ChatAI, in a large tertiary government hospital to examine its implementation challenges. </sec> <sec> <title>METHODS</title> We employed an instrumental case study approach, utilizing interviews and archival data. We conducted 21 semi-structured interviews with the implementation team and hospital staff who had interacted with ChatAI. Interviews were audio-recorded and transcribed. Socio-technical systems (STS) theory, specifically Davis et al.'s (2014) six-element framework, was used to examine ChatAI's use and integration within the hospital setting. </sec> <sec> <title>RESULTS</title> Multiple misalignments among Davis et al.'s (2014) six socio-technical elements (goals, people, processes, culture, technology, and infrastructure) limited ChatAI's user adoption and sustainability. Although the hospital's innovation center team attempted to address these initial misalignments, contextual changes such as new regulatory mandates, infrastructure changes, and evolving stakeholder practices introduced further first- and second-order misalignments between ChatAI and the hospital, eventually leading to its discontinuation. </sec> <sec> <title>CONCLUSIONS</title> This study highlights how misalignments across socio-technical dimensions in large-scale implementations can undermine the use and sustainability of AI systems. These findings can inform future efforts to implement AI tools in real-world healthcare settings, ensuring better integration with existing organizational infrastructures.
</sec> <sec> <title>CLINICALTRIAL</title> N.A. </sec>
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,316 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,177 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,575 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,468 citations