This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
State of the AI: Post-Deployment Monitoring of Radiology-Focused Internally Developed AI
Citations: 1
Authors: 24
Year: 2026
Abstract
Articles on the development of medical image artificial intelligence (AI) algorithms are numerous in the literature, but deployment to clinical practice is infrequently discussed. The Enterprise Radiology Framework for AI Software Technology Team at Mayo Clinic has focused on bridging the gap in clinical translation of medical image AI algorithms since its inception in 2019. During this time, we have released 17 algorithms into our radiology clinical practice. Recently, we have placed an increased focus on monitoring these algorithms, as few reports of practical experience are documented in the literature. Our expanded monitoring efforts include daily, weekly, and yearly monitoring of utilization, failure modes, data drift, and end-user feedback through automated alerts, dedicated dashboards, and pointed investigations to enable optimal algorithmic processing. End-user feedback is elicited during annual reviews to ensure clinical needs are still being met. Automated monitoring has enabled earlier identification of problems, such as images no longer routing through the orchestration engine to the appropriate algorithm, minimizing potential disruption to the clinical practice and ensuring continued algorithmic utilization. Monitoring has also reinforced the importance of key aspects of interdisciplinary research and translation, such as early discussions on clinical needs coupled with technological ability and proper training. By sharing our experience and continuing to improve monitoring methods as a community, we can minimize risk and maximize the benefits of medical pixel-based AI.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,336 cit.
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,207 cit.
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,607 cit.
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 cit.
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,476 cit.
Authors
- Cole J. Cook
- Jason R. Klug
- Blaize W. Kandler
- Abraham Baez-Suarez
- Adam P. Dachowicz
- Daniel J. Blezek
- Andrew D. Missert
- Gian Marco Conte
- Justin D. Benfield
- Amanda Mensing-Diggs
- Matthew T. Edwards
- Michele A. Powell
- Emily N. Sheedy
- Holly M. Meyer
- Joseph Melnick
- Bryce F. Flor
- David E. Vidal
- Vera Sorin
- B. Selnur Erdal
- Steve G. Langer
- Jeremy D. Collins
- Eric E. Williamson
- Panagiotis Korfiatis
- Timothy L. Kline