This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Transforming Informed Consent Generation Using Large Language Models: Mixed Methods Study
18
Citations
9
Authors
2025
Year
Abstract
Background: Informed consent forms (ICFs) for clinical trials have become increasingly complex, often hindering participant comprehension and engagement due to legal jargon and lengthy content. Recent advances in large language models (LLMs) present an opportunity to streamline the ICF creation process while improving readability, understandability, and actionability. Objectives: This study aims to evaluate the performance of the Mistral 8x22B LLM in generating ICFs with improved readability, understandability, and actionability. Specifically, we evaluate the model's effectiveness in generating ICFs that are readable, understandable, and actionable while maintaining accuracy and completeness. Methods: We processed 4 clinical trial protocols from the institutional review board of UMass Chan Medical School using the Mistral 8x22B model to generate key information sections of ICFs. A multidisciplinary team of 8 evaluators, including clinical researchers and health informaticians, assessed the generated ICFs against human-generated counterparts for completeness, accuracy, readability, understandability, and actionability. Readability, Understandability, and Actionability of Key Information indicators, comprising 18 binary-scored items, were used to evaluate these aspects, with higher scores indicating greater accessibility, comprehensibility, and actionability of the information. Statistical analysis, including Wilcoxon rank sum tests and intraclass correlation coefficient calculations, was used to compare outputs. Results: LLM-generated ICFs demonstrated comparable performance to human-generated versions across key sections, with no significant differences in accuracy and completeness (P>.10). The LLM outperformed human-generated ICFs in readability (Readability, Understandability, and Actionability of Key Information score of 76.39% vs 66.67%; Flesch-Kincaid grade level of 7.95 vs 8.38) and understandability (90.63% vs 67.19%; P=.02).
The LLM-generated content achieved a perfect score in actionability compared with the human-generated version (100% vs 0%; P<.001). The intraclass correlation coefficient for evaluator consistency was high at 0.83 (95% CI 0.64-1.03), indicating good reliability across assessments. Conclusions: The Mistral 8x22B LLM showed promising capabilities in enhancing the readability, understandability, and actionability of ICFs without sacrificing accuracy or completeness. LLMs present a scalable, efficient solution for ICF generation, potentially enhancing participant comprehension and consent in clinical trials.
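The abstract compares outputs by Flesch-Kincaid grade level, a standard readability metric computed from sentence length and syllables per word. As a rough illustration (not the study's actual evaluation pipeline), a minimal sketch of the formula, using a naive vowel-group syllable counter as a stand-in for a proper syllable algorithm:

```python
import re

def flesch_kincaid_grade(text: str) -> float:
    """Approximate Flesch-Kincaid grade level.

    Standard formula: 0.39*(words/sentences)
    + 11.8*(syllables/words) - 15.59.
    Syllables are estimated by counting vowel groups,
    which is only a rough heuristic.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)

    def syllables(word: str) -> int:
        # Each maximal run of vowels counts as one syllable (min 1).
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    total_syllables = sum(syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * total_syllables / len(words)
            - 15.59)

# Hypothetical consent-form sentence for illustration only.
sample = "You may stop at any time. Ask the study team any questions."
print(round(flesch_kincaid_grade(sample), 2))
```

Lower grade levels indicate simpler text, which is why the reported 7.95 (LLM) vs 8.38 (human) favors the model's output.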
Similar works
World Medical Association Declaration of Helsinki: Ethical Principles for Medical Research Involving Human Subjects
2003 · 10,822 citations
SPIRIT 2013 explanation and elaboration: guidance for protocols of clinical trials
2013 · 7,012 citations
Empirical evidence of bias. Dimensions of methodological quality associated with estimates of treatment effects in controlled trials
1995 · 5,586 citations
The ARRIVE guidelines 2.0: Updated guidelines for reporting animal research
2020 · 5,435 citations
The global landscape of AI ethics guidelines
2019 · 4,809 citations