This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Human Factors Influencing Trust in Healthcare Artificial Intelligence: Systematic Literature Review
Citations: 0
Authors: 2
Year: 2026
Abstract
Occupational Applications
The adoption of AI in healthcare depends on calibrated trust: trust that matches the system's reliability and context. This review shows that clinicians value workload reduction, explainability, and alignment with clinical judgment, while patients emphasize transparency, fairness, and human-like interaction. Yet trust is not automatic: performance gains may fail if AI undermines professional autonomy, and explainability reassures novices more than experts in high-stakes tasks. For occupational applications, AI must be designed to reduce cognitive burden, respect user expertise, and adapt to domain-specific needs. Organizations should invest in usability testing, peer and organizational support, and targeted training to foster informed trust. Regulators should enforce transparency and human oversight standards. Ultimately, calibrated trust, avoiding blind reliance or excessive skepticism, is essential to protect healthcare workers and patients while ensuring AI strengthens decision-making and safety.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,493 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,377 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,835 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,555 citations