OpenAlex · Updated hourly · Last updated: 01.04.2026, 20:59

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Exploring BioClinical BERT’s NLP Capabilities with Explainability Techniques

2024 · 0 citations · 4 authors

Abstract

Natural Language Processing has gained significant attention in the world of Information Technology and is experiencing rapid growth. Its applications span various domains, with a notable surge in its utilization within the healthcare industry. This has fueled the development of language models trained on large amounts of clinical data, such as Bio+Clinical BERT. However, a crucial concern remains regarding the trustworthiness of NLP models: traditional accuracy measures alone may not be sufficient to instill confidence in their reliability. In this study, we address this concern by exploring the application of explainability to NLP tasks in the healthcare sector. We employ explainability techniques to gain deeper insights into the underlying mechanisms and to evaluate the trustworthiness of fine-tuned models based on Bio+Clinical BERT. Specifically, the LIME and SHAP techniques are used to produce explanations, which prove instrumental in enhancing our understanding of the models and in identifying their strengths and weaknesses.
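The shared idea behind LIME and SHAP is perturbation-based attribution: perturb the input, re-query the model, and credit the output change to the perturbed tokens. The sketch below illustrates only that intuition, using a hypothetical keyword classifier as a stand-in for the fine-tuned Bio+Clinical BERT model; it is not the actual `lime` or `shap` API, and the keywords and weights are invented for the example.

```python
# Hypothetical stand-in classifier: the study fine-tunes Bio+Clinical BERT,
# but any callable mapping text to a probability fits the same loop.
KEYWORDS = {"chest": 0.4, "pain": 0.3, "cardiac": 0.5}  # illustrative weights

def predict(tokens):
    """Toy probability of a 'cardiac' label from keyword weights."""
    return min(sum(KEYWORDS.get(t, 0.0) for t in tokens), 1.0)

def occlusion_attributions(tokens):
    """Perturbation-based attribution in the spirit of LIME/SHAP:
    remove each token in turn and record how much the prediction drops.
    Tokens whose removal changes the output most are most influential."""
    base = predict(tokens)
    return [(tok, base - predict(tokens[:i] + tokens[i + 1:]))
            for i, tok in enumerate(tokens)]

if __name__ == "__main__":
    for tok, weight in occlusion_attributions("patient reports chest pain".split()):
        print(f"{tok:10s} {weight:+.2f}")
```

The real libraries go further than this occlusion loop: LIME fits a local linear surrogate model over many random perturbations, and SHAP averages contributions over token coalitions to approximate Shapley values, but both ultimately explain a prediction through the model's responses to perturbed inputs.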

Topics

Artificial Intelligence in Healthcare and Education · Explainable Artificial Intelligence (XAI) · Topic Modeling