This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Transforming Informed Consent Generation Using Large Language Models: Insights, Best Practices, and Lessons Learned for Clinical Trials (Preprint)
Citations: 0
Authors: 9
Year: 2024
Abstract
<sec> <title>BACKGROUND</title> Informed consent forms (ICFs) for clinical trials have become increasingly complex, often hindering participant comprehension and engagement due to legal jargon and lengthy content. Recent advances in large language models (LLMs) present an opportunity to streamline the ICF creation process while improving readability, understandability, and actionability. </sec> <sec> <title>OBJECTIVE</title> This study aims to evaluate the performance of the Mistral 8x22B LLM in generating informed consent forms. Specifically, we evaluate the model's effectiveness in generating ICFs that are readable, understandable, and actionable while maintaining accuracy and completeness. </sec> <sec> <title>METHODS</title> We processed four clinical trial protocols from the institutional review board (IRB) of UMass Chan Medical School using the Mistral 8x22B model to generate the key information sections of ICFs. A multidisciplinary team of eight evaluators, including clinical researchers and health informaticians, assessed the generated ICFs against human-generated counterparts for completeness, accuracy, readability, understandability, and actionability. The Readability, Understandability, and Actionability of Key Information (RUAKI) indicators, comprising 18 binary-scored items, were used to evaluate these aspects, with higher scores indicating greater accessibility, comprehensibility, and actionability of the information. Statistical analyses, including Wilcoxon rank-sum tests and intraclass correlation coefficient (ICC) calculations, were performed to compare outputs. </sec> <sec> <title>RESULTS</title> LLM-generated ICFs demonstrated comparable performance to human-generated versions across key sections, with no significant differences in accuracy and completeness (P > .10). The LLM outperformed human-generated ICFs in readability (RUAKI score of 76.39% vs. 66.67%, Flesch-Kincaid Grade Level of 7.95 vs. 8.38) and understandability (90.63% vs. 67.19%, P = .02). The LLM-generated content achieved a perfect score in actionability compared with the human-generated version (100% vs. 0%, P < .001). The ICC for evaluator consistency was high at 0.83 (95% CI [0.64, 1.03]), indicating good reliability across assessments. </sec> <sec> <title>CONCLUSIONS</title> The Mistral 8x22B LLM significantly improves the readability, understandability, and actionability of ICFs without sacrificing accuracy or completeness. LLMs present a scalable, efficient solution for ICF generation, potentially enhancing participant comprehension and consent in clinical trials. </sec>
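The Methods describe comparing LLM-generated and human-generated ICF scores with a Wilcoxon rank-sum test. The study's actual evaluation data are not included on this page, so the following is only a minimal sketch of that kind of comparison: a pure-Python rank-sum test (normal approximation, midranks for ties, no tie-variance correction) applied to hypothetical per-evaluator scores. The function names and the example numbers are illustrative assumptions, not values from the paper.

```python
import math

def midranks(values):
    """Assign 1-based ranks to values, averaging ranks over ties (midranks)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        # Find the run of tied values starting at position i in sorted order.
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mid = (i + j) / 2 + 1.0  # average of the 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = mid
        i = j + 1
    return ranks

def rank_sum_test(a, b):
    """Two-sided Wilcoxon rank-sum (Mann-Whitney U) test.

    Returns (U, p) using the normal approximation; tie correction of the
    variance is omitted for brevity, so p-values are approximate with ties.
    """
    n1, n2 = len(a), len(b)
    ranks = midranks(list(a) + list(b))
    r1 = sum(ranks[:n1])                      # rank sum of the first sample
    u = r1 - n1 * (n1 + 1) / 2                # Mann-Whitney U statistic
    mu = n1 * n2 / 2                          # mean of U under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sigma
    # Two-sided p via the standard normal CDF, Phi(x) = (1 + erf(x/sqrt(2)))/2.
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return u, min(p, 1.0)

if __name__ == "__main__":
    # Hypothetical per-evaluator understandability scores (not the study's data).
    llm_scores = [90.0, 92.5, 88.0, 91.0]
    human_scores = [65.0, 70.0, 68.5, 66.0]
    u, p = rank_sum_test(llm_scores, human_scores)
    print(f"U = {u}, p = {p:.4f}")
```

With entirely non-overlapping groups like the demo above, U hits its extreme value and the approximate p-value is small, mirroring the direction of the comparison the abstract reports; real use on four protocols and eight evaluators would apply the same test to the actual RUAKI item scores.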
Related Works
World Medical Association Declaration of Helsinki: Ethical Principles for Medical Research Involving Human Subjects
2003 · 10,819 citations
Estimating the mean and variance from the median, range, and the size of a sample
2005 · 8,961 citations
SPIRIT 2013 explanation and elaboration: guidance for protocols of clinical trials
2013 · 6,965 citations
The ARRIVE guidelines 2.0: Updated guidelines for reporting animal research
2020 · 5,276 citations
The global landscape of AI ethics guidelines
2019 · 4,588 citations