This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Decoding the “black-box”: explainable artificial intelligence towards trustworthy advancement in respiratory medicine
Citations: 0
Authors: 1
Year: 2026
Abstract
Artificial intelligence (AI) is increasingly applied in respiratory medicine, offering potential advances in diagnostics, treatment guidance and patient monitoring. However, widespread clinical adoption remains limited due to the opaque "black-box" nature of many algorithms, which challenges clinicians' trust and hinders integration into routine practice. Explainable AI (XAI; methods and frameworks that render AI outputs interpretable and transparent) has emerged as a promising approach. By providing insights into algorithmic reasoning alongside predictive performance, XAI can support clinician evaluation, facilitate informed decision-making, and enhance accountability in patient care. This Viewpoint discusses the potential applications of XAI across respiratory medicine, highlighting its role in improving transparency, fostering clinician engagement and supporting integration of AI into clinical workflows. Beyond technical considerations, successful adoption of XAI requires cultural and educational shifts, including training programmes, interdisciplinary collaboration, patient engagement, and adherence to ethical and regulatory standards. XAI also holds potential in supporting shared decision-making, translating complex algorithmic outputs into understandable information for patients. By bridging advanced computational tools with clinical reasoning, XAI may help respiratory medicine move towards responsible, patient-centred and transparent AI implementation. Continued research, education, and collaboration are essential to realise its potential and ensure AI serves as a reliable partner in delivering high-quality respiratory care.
Related works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,562 citations
Generative Adversarial Nets
2023 · 19,892 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,298 citations
"Why Should I Trust You?"
2016 · 14,384 citations
On a Method to Measure Supervised Multiclass Model's Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,164 citations