This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
DiversityMedQA: A Benchmark for Assessing Demographic Biases in Medical Diagnosis using Large Language Models
Citations: 7 · Authors: 7 · Year: 2024
Abstract
As large language models (LLMs) gain traction in healthcare, concerns about their susceptibility to demographic biases are growing. We introduce DiversityMedQA, a novel benchmark designed to assess LLM responses to medical queries across diverse patient demographics, such as gender and ethnicity. By perturbing questions from the MedQA dataset, which comprises medical board exam questions, we created a benchmark that captures the nuanced differences in medical diagnosis across varying patient profiles. To ensure that our perturbations did not alter the clinical outcomes, we implemented a filtering strategy to validate each perturbation, so that any performance discrepancies would be indicative of bias. Our findings reveal notable discrepancies in model performance when tested against these demographic variations. By releasing DiversityMedQA, we provide a resource for evaluating and mitigating demographic bias in LLM medical diagnoses.
Related Work
"Why Should I Trust You?"
2016 · 14,528 citations
A Comprehensive Survey on Graph Neural Networks
2020 · 8,815 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,377 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,835 citations
Artificial intelligence in healthcare: past, present and future
2017 · 4,472 citations