This is an overview page with metadata for this scientific publication. The full article is available from the publisher.
Integrating chain of thought and explainable AI in BERT-based deep learning for interpretable medical diagnosis
Citations: 0
Authors: 2
Year: 2025
Abstract
The lack of transparency in Artificial Intelligence (AI) systems raises serious concerns in the medical field, where healthcare practitioners often perceive AI models as difficult to interpret. This study evaluates the integration of the Chain of Thought (CoT) approach and Explainable AI (XAI) within a text-based deep learning model to enhance the interpretability of medical diagnoses. The model was built on the "prajjwal1/bert-medium" Transformer architecture and designed to classify diagnoses from patient complaints and electronic medical record entries. Training employed FocalSmoothingLoss as the loss function, the AdamW optimizer, and learning-rate adjustment via the ReduceLROnPlateau scheduler. The CoT implementation constructs step-by-step reasoning logs that mimic human clinical reasoning, while XAI methods such as attention visualization, LIME, and SHAP were applied to interpret the contribution of each input feature to the final prediction. The analysis demonstrates that the model can systematically explain symptom detection, semantic analysis, and the final diagnostic decision. The consistency between the CoT reasoning, the attention distributions, and the LIME and SHAP visualizations further reinforces the validity of the generated interpretations. The model achieved a macro accuracy of 82%, with reasoning outputs that medical professionals can audit clinically. This study contributes to enhancing user trust in medical AI systems by providing a strong interpretative framework. The integration of CoT and XAI not only clarifies the prediction process but also promotes the development of text-based diagnostic systems that are more accountable, adaptive, and ethical for real-world clinical practice.
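To make the training setup concrete, the following is a minimal PyTorch/Transformers sketch of the components named in the abstract. The paper's implementation is not published, so the FocalSmoothingLoss definition (here assumed to be focal loss applied to label-smoothed targets), all hyperparameters (gamma, smoothing, learning rate, patience), and the number of diagnosis classes are illustrative assumptions; only the model checkpoint, optimizer, and scheduler choices come from the abstract.

import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForSequenceClassification

class FocalSmoothingLoss(nn.Module):
    """Assumed form: focal modulation over a label-smoothed cross entropy."""
    def __init__(self, num_classes, gamma=2.0, smoothing=0.1):
        super().__init__()
        self.num_classes = num_classes
        self.gamma = gamma          # focusing parameter: down-weights easy examples
        self.smoothing = smoothing  # probability mass spread over non-target classes

    def forward(self, logits, target):
        log_probs = F.log_softmax(logits, dim=-1)
        probs = log_probs.exp()
        # Build the smoothed one-hot target distribution
        true_dist = torch.full_like(log_probs, self.smoothing / (self.num_classes - 1))
        true_dist.scatter_(1, target.unsqueeze(1), 1.0 - self.smoothing)
        # Per-class focal weight (1 - p)^gamma, then smoothed cross entropy
        focal = (1.0 - probs) ** self.gamma
        return -(true_dist * focal * log_probs).sum(dim=-1).mean()

NUM_DIAGNOSES = 10  # placeholder: the abstract does not state the class count
tokenizer = AutoTokenizer.from_pretrained("prajjwal1/bert-medium")
model = AutoModelForSequenceClassification.from_pretrained(
    "prajjwal1/bert-medium", num_labels=NUM_DIAGNOSES)

criterion = FocalSmoothingLoss(num_classes=NUM_DIAGNOSES)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
# ReduceLROnPlateau is stepped once per epoch with the validation loss,
# e.g. scheduler.step(val_loss), not once per batch.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.5, patience=2)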
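The LIME step described in the abstract can be sketched the same way: the explainer perturbs the complaint text and fits a local surrogate model over the classifier's predicted probabilities, so each token receives a signed contribution to the diagnosis. The class names and the example complaint below are placeholders, and the SHAP analysis can be wired up analogously (for example, shap.Explainer over a transformers text-classification pipeline).

import torch
from lime.lime_text import LimeTextExplainer

model.eval()  # reuse the model and tokenizer from the sketch above

def predict_proba(texts):
    # LIME calls this on batches of perturbed strings and expects
    # an (n_samples, n_classes) array of class probabilities.
    enc = tokenizer(list(texts), padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits
        # Attention maps for the attention-visualization step are available
        # via model(**enc, output_attentions=True) if needed.
    return torch.softmax(logits, dim=-1).numpy()

explainer = LimeTextExplainer(class_names=[f"diagnosis_{i}" for i in range(10)])
exp = explainer.explain_instance(
    "fever, productive cough, and chest pain for three days",  # illustrative complaint
    predict_proba, num_features=8, top_labels=1)
top = exp.available_labels()[0]
print(exp.as_list(label=top))  # token -> signed contribution to the top class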
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,463 citations
Generative Adversarial Nets
2014 · 19,843 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,259 citations
"Why Should I Trust You?"
2016 · 14.314 Zit.
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,138 citations