This is an overview page with metadata for this scientific work. The full article is available from the publisher.
The Clinical Considerations for Trustworthy AI in Oncologic Imaging
0
Citations
10
Authors
2025
Year
Abstract
The integration of artificial intelligence (AI) into medical imaging has the potential to revolutionise diagnostics and patient care. However, ensuring trust in AI-driven solutions remains a critical challenge. This chapter, written from the healthcare provider's perspective and by Artificial Intelligence for Health Imaging Network (AI4HI) experts, explores the key aspects of trustworthy AI in oncologic imaging and is structured around several fundamental themes. We begin with an introduction to AI decision support systems in routine clinical practice, outlining both their potential benefits and the concerns that must be addressed. A brief case study illustrates real-world applications and challenges encountered in deploying AI in medical settings. Next, we delve into trustworthy cancer imaging AI solutions, focusing on the role of trust in medicine. Several factors influence confidence in AI for cancer imaging, including stakeholder involvement (clinicians and patient representatives), technology development (design, data collection, algorithm training, and validation), and robust technology assessment. Clinical validity, user experience, robustness, explainability, generalisability, and adherence to AI4HI practices are crucial for ensuring reliable performance and user acceptance. The chapter then addresses the transfer of AI solutions from development to clinical practice, examining the clinical gap AI seeks to fill. Regulatory approval and legal and ethical aspects play a pivotal role in adoption, requiring compliance with established standards. Successful integration into clinical workflows necessitates evaluating individual AI solutions, utilising orchestrators, conducting local validation, adapting technology, and ensuring adequate training for healthcare professionals. Finally, we emphasise the importance of quality management, continuous monitoring, and improvement to maintain trust. Adapting to changing circumstances, implementing structured quality audits, and establishing update strategies are necessary to ensure AI solutions remain relevant and effective. Through interdisciplinary collaboration and adherence to regulatory, ethical, and technological best practices, AI in medical imaging can be developed and implemented in a way that fosters trust and improves patient outcomes.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,697 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,602 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,127 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,872 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Authors
Institutions
- Gdańsk Medical University (PL)
- Royal Marsden Hospital (GB)
- Berlin Institute of Health at Charité - Universitätsmedizin Berlin (DE)
- Charité - Universitätsmedizin Berlin (DE)
- Candiolo Cancer Institute (IT)
- Umeå University (SE)
- Leitat Technological Center (ES)
- Instituto de Investigación Sanitaria La Fe (ES)
- Radboud University Nijmegen (NL)
- Radboud University Medical Center (NL)
- Institució Catalana de Recerca i Estudis Avançats (ES)
- Universitat de Barcelona (ES)
- Consolidated Contractors Company (Greece) (GR)