This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Systematic literature review on the application of explainable artificial intelligence in palliative care studies
Citations: 9
Authors: 3
Year: 2025
Abstract
Given the critical role of AI-driven decisions in patient care, adopting XAI techniques is essential for fostering trust and usability. Although progress has been made, significant gaps persist. A main challenge remains the trade-off between model performance and interpretability, as highly accurate models often lack the transparency required to build trust in clinical settings. Additionally, complex models frequently provide inadequate explanations for their outputs, studies lack consistent documentation, and XAI techniques see limited application, reducing the interpretability of machine learning studies for clinicians and decision-makers.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,316 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,177 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,575 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,468 citations