This is an overview page with metadata for this scientific publication. The full article is available from the publisher.
Trust at every step: Embedding trust quality gates into the visual data exploration loop for machine learning-based clinical decision support systems
Citations: 4
Authors: 2
Year: 2025
Abstract
Recent advancements in machine learning (ML) support novel applications in healthcare, most significantly clinical decision support systems (CDSS). The lack of trust hinders acceptance and is one of the main reasons for the limited number of successful implementations in clinical practice. Visual analytics enables the development of trustworthy ML models by providing versatile interactions and visualizations for both data scientists and healthcare professionals (HCPs). However, specific support for HCPs to build trust towards ML models through visual analytics remains underexplored. We propose an extended visual data exploration methodology to enhance trust in ML-based healthcare applications. Based on a literature review on trustworthiness of CDSS, we analyze emerging themes and their implications. By introducing trust quality gates mapped onto the Visual Data Exploration Loop, we provide structured checkpoints for multidisciplinary teams to assess and build trust. We demonstrate the applicability of this methodology in three real-world use cases – policy development, plausibility testing, and model optimization – highlighting its potential to advance trustworthy ML in the healthcare domain.
• Literature review reveals common trust aspects during CDSS model development.
• Emerging themes include privacy, consistency, fairness, flexibility and autonomy.
• Proposed framework relates trustworthiness aspects to trust building measures.
• Outline of integration into the CDSS development process via trust quality gates.
• Case studies illustrate instantiations of the proposed framework.
Similar works
"Why Should I Trust You?"
2016 · 14,314 citations
A Comprehensive Survey on Graph Neural Networks
2020 · 8,684 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,211 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,614 citations
Artificial intelligence in healthcare: past, present and future
2017 · 4,411 citations