This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Can surgeons trust AI? Perspectives on machine learning in surgery and the importance of eXplainable Artificial Intelligence (XAI)
2025 · 22 citations · 4 authors · Langenbeck's Archives of Surgery · Open Access
Abstract
Transparency and interpretability are essential for the effective integration of AI models into clinical practice.
Related works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,436 citations
Generative Adversarial Nets
2023 · 19,843 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,256 citations
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
2016 · 14,294 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,133 citations
Authors
Institutions
- Heidelberg University (DE)
- University Hospital Heidelberg (DE)
- National Center for Tumor Diseases (DE)
- University of Basel (CH)
- University Hospital of Basel (CH)
- University Hospital Carl Gustav Carus (DE)
- Technische Universität Dresden (DE)
- Turing Institute (GB)
- University of California, Los Angeles (US)
- University of Cambridge (GB)
- Bridge University (SS)
- The Alan Turing Institute (GB)
- Artificial Intelligence in Medicine (Canada) (CA)
Topics
- Explainable Artificial Intelligence (XAI)
- Machine Learning in Healthcare
- Artificial Intelligence in Healthcare and Education