OpenAlex · Updated hourly · Last updated: April 7, 2026, 00:47

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Towards Explainable and Trustworthy AI for Decision Support in Medicine: An Overview of Methods and Good Practices

2021 · 1 citation · Aristotle University of Thessaloniki · Open Access
Open full text at publisher

Citations: 1 · Authors: 3 · Year: 2021

Abstract

Artificial Intelligence (AI) is defined as intelligence exhibited by machines, such as electronic computers. It can involve reasoning, problem solving, learning and knowledge representation, which are the main focus in the medical domain. Other forms of intelligence, including autonomous behavior, are also part of AI. Data-driven methods for decision support have been employed in the medical domain for some time. Machine learning (ML) is used for a wide range of complex tasks across many industry sectors. However, a broader spectrum of AI, including deep learning (DL) as well as autonomous agents, has recently been gaining more attention and has raised expectations for solving numerous problems in the medical domain. A barrier to AI adoption, or rather a concern, is trust in AI, which is often hindered by issues such as a lack of understanding of how a black-box model functions, or a lack of credibility in the reporting of results. Explainability and interpretability are prerequisites for the development of AI-based systems that are lawful, ethical and robust. In this respect, this paper presents an overview of concepts, best practices, and success stories, and opens the discussion for multidisciplinary work towards establishing trustworthy AI.

Topics

Artificial Intelligence in Healthcare and Education · Explainable Artificial Intelligence (XAI) · Machine Learning in Healthcare