OpenAlex · Updated hourly · Last updated: 27 Mar 2026, 03:21

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Framework for bias evaluation in large language models in healthcare settings

2025 · 18 citations · npj Digital Medicine · Open Access

Citations: 18 · Authors: 9 · Year: 2025

Abstract

A critical gap in the adoption of large language models for AI-assisted clinical decisions is the lack of a standardized audit framework to evaluate models for accuracy and bias. We introduce a five-step framework that guides practitioners through stakeholder engagement, model calibration to specific patient populations, and rigorous testing through clinically relevant scenarios. We provide open-access tools for stakeholder engagement and an example of an audit. As the regulation of models becomes more critical, we believe adoption of an audit framework that tests model outputs, rather than regulating specific hyperparameters or inputs, will encourage the responsible use of AI in clinical settings.
