OpenAlex · Updated hourly · Last updated: 04.05.2026, 13:54

This is an overview page with metadata about this scholarly work. The full article is available from the publisher.

Demographic biases in AI-generated simulated patient cohorts: a comparative analysis against census benchmarks

2025 · 3 citations · Advances in Simulation · Open Access

3 citations · 2 authors · Year: 2025

Abstract

BACKGROUND: Generative artificial intelligence models are being introduced as low-cost tools for creating simulated patient cohorts in undergraduate medical education. Their educational value, however, depends on the extent to which the synthetic populations mirror real-world demographic diversity. We therefore assessed whether two commonly deployed large language models produce patient profiles that reflect the current age, sex, and ethnic composition of the UK.

METHODS: GPT-3.5-turbo-0125 and GPT-4-mini-2024-07-18 were each prompted, without demographic steering, to generate 250 UK-based 'patients'. Age was returned directly by the model; sex and ethnicity were inferred from given and family names using a validated census-derived classifier. Observed frequencies for each demographic variable were compared with England and Wales 2021 census expectations by chi-square goodness-of-fit tests.

RESULTS: Both cohorts diverged significantly from census benchmarks (p < 0.0001 for every variable). Age distributions showed an absence of very young and older individuals, with certain middle-aged groups overrepresented (GPT-3.5: χ2(17) = 1310.4, p < 0.0001; GPT-4-mini: χ2(17) = 1866.1, p < 0.0001). Neither model produced patients younger than 25 years; GPT-3.5 generated no one older than 47 years, and GPT-4-mini no one older than 56 years. Sex proportions also differed markedly, skewing heavily toward males (GPT-3.5: χ2(1) = 23.84, p < 0.0001; GPT-4-mini: χ2(1) = 191.7, p < 0.0001); male patients constituted 64.7% and 92.8% of the two cohorts, respectively. Name diversity was limited: GPT-3.5 yielded 104 unique first-last-name combinations, whereas GPT-4-mini produced only nine. Ethnic profiles were similarly imbalanced, with some groups overrepresented and others entirely absent (χ2(10) = 42.19, p < 0.0001).

CONCLUSIONS: In their default state, the evaluated models create synthetic patient pools that exclude younger, older, female, and most minority-ethnic representations. Such demographically narrow outputs threaten to normalise biased clinical expectations and may undermine efforts to prepare students for equitable practice. Baseline auditing of model behaviour is therefore essential, providing a benchmark against which prompt-engineering or data-curation strategies can be evaluated before generative systems are integrated into formal curricula.
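The chi-square goodness-of-fit comparison described in the methods can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' code: the observed counts (162 male, 88 female in a 250-patient cohort, matching the reported 64.7% male share) and the assumed census split of roughly 49% male / 51% female are stand-in numbers for demonstration only.

```python
import math

def chi_square_gof(observed, expected):
    """Chi-square goodness-of-fit statistic: sum of (O - E)^2 / E."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical sex split for a 250-patient synthetic cohort (illustrative,
# not the paper's raw data): 64.7% male ~= 162 males, 88 females.
observed = [162, 88]

# Assumed census expectation of ~49% male / 51% female, scaled to n = 250.
expected = [0.49 * 250, 0.51 * 250]

chi2 = chi_square_gof(observed, expected)

# For df = 1, the chi-square survival function reduces to erfc(sqrt(x / 2)),
# so the p-value can be computed with the standard library alone.
p_value = math.erfc(math.sqrt(chi2 / 2))
```

With these stand-in numbers the statistic lands near the χ2(1) = 23.84 reported for GPT-3.5, and the p-value falls well below 0.0001, consistent with the paper's rejection of the census null hypothesis. In practice `scipy.stats.chisquare` would give the same result for arbitrary degrees of freedom.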


Topics

Artificial Intelligence in Healthcare and Education · Simulation-Based Education in Healthcare · Surgical Simulation and Training