This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Off‐the‐Shelf Large Language Models for Guiding Pharmacoepidemiological Study Design
Citations: 1
Authors: 6
Year: 2025
Abstract
This study aimed to assess the ability of two off-the-shelf large language models, ChatGPT and Gemini, to support the design of pharmacoepidemiological studies. We assessed 48 study protocols of pharmacoepidemiological studies published between 2018 and 2024, covering various study types, including disease epidemiology, drug utilization, safety, and effectiveness. The coherence (i.e., "Is the response coherent and well-formed, or is it difficult to understand?") and relevance (i.e., "Is the response relevant and informative, or is it lacking in substance?") of the large language models' responses were evaluated by human experts across seven key study design components. Coding accuracy was also assessed. Both large language models demonstrated high coherence, with over 90% of study components rated as "Strongly agree" by experts for most categories. ChatGPT achieved the highest coherence for "Index date" (97.9%) and "Study design" (95.8%). Gemini excelled in "Study outcome" (93.9%) and "Study exposure" (95.9%). Relevance, however, was more variable, with ChatGPT aligning with expert recommendations in over 90% of cases for "Index date" and "Study design" but showing lower agreement for covariates (65%) and follow-up (70%). Coding agreement percentages revealed varying levels of concordance, with the Anatomical Therapeutic Chemical (ATC) classification system showing the highest agreement with experts at 50%. In contrast, the Current Procedural Terminology and International Classification of Diseases systems showed agreements of 22.2% and 20%, respectively. While ChatGPT and Gemini show promise in certain tasks supporting pharmacoepidemiological study design, their limitations in relevance and coding accuracy highlight the need for critical oversight by domain experts.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,316 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,177 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,575 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,468 citations