This is an overview page with metadata for this scientific publication. The full article is available from the publisher.
Trustworthy AI in digital health: a comprehensive review of robustness and explainability
Citations: 0 · Authors: 3 · Year: 2026
Abstract
Ensuring trust in artificial intelligence (AI) systems is essential for the safe and ethical integration of machine learning systems into high-stakes domains such as digital health. Key dimensions, including robustness, explainability, fairness, accountability, and privacy, need to be addressed throughout the AI lifecycle, from problem formulation and data collection to model deployment and human interaction. While various contributions address different aspects of trustworthy AI, a focused synthesis on robustness and explainability, especially tailored to the healthcare context, remains limited. This review addresses that need by organizing recent advancements into an accessible framework, highlighting both technical and practical considerations. We present a structured overview of methods, challenges, and solutions, aiming to support researchers and practitioners in developing reliable and explainable AI (XAI) solutions for digital health. This review article is organized into three main parts. First, we introduce core pillars of trustworthy AI and discuss the technical and ethical challenges they pose, particularly in the context of digital health. Second, we explore application-specific trust considerations across domains such as intensive care, mental health, metabolic disease, and public health surveillance, highlighting how explainability, clinical validation, and human oversight support trust. Lastly, we present recent advancements in techniques aimed at improving robustness under data scarcity and distributional shifts, as well as XAI methods ranging from feature attribution to gradient-based interpretations and counterfactual explanations. This paper is further enriched with detailed discussions of the contributions toward robustness and explainability in digital health, the development of trustworthy AI systems in the era of large language models, and various evaluation metrics for measuring trust and related parameters such as validity, fidelity, and diversity, offering a roadmap for building safer and more reliable AI systems.
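Among the XAI methods named in the abstract are gradient-based interpretations. Purely as a hedged illustration, and not as code from the paper, the sketch below computes a vanilla-gradient saliency map; it assumes PyTorch, and the classifier and input are hypothetical stand-ins.

```python
# Illustrative sketch only (not from the reviewed paper): vanilla-gradient saliency,
# one of the gradient-based interpretation methods the review surveys.
# Assumes PyTorch; the classifier and input below are hypothetical stand-ins.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in classifier
model.eval()

x = torch.rand(1, 1, 28, 28, requires_grad=True)   # stand-in "image"
logits = model(x)
target = logits.argmax(dim=1).item()                # class to explain

# Backpropagate the target logit to the input pixels; the absolute gradient
# acts as a per-pixel feature-attribution (saliency) map.
logits[0, target].backward()
saliency = x.grad.abs().squeeze()                   # shape: (28, 28)
print(saliency.shape)
```

Grad-CAM, listed under related works below, refines this idea by weighting intermediate convolutional feature maps with their gradients instead of using raw input gradients.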
Related works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,672 citations
Generative Adversarial Nets
2023 · 19,894 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,317 citations
"Why Should I Trust You?"
2016 · 14.518 Zit.
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,191 citations