This is an overview page with metadata for this scientific article. The full article is available from the publisher.
ChatGPT and Patient Information in Nuclear Medicine: GPT-3.5 Versus GPT-4
49 Citations · 3 Authors · Year: 2023
Abstract
ChatGPT was released in late November 2022, powered by the generative pretrained transformer (GPT) version 3.5. It has emerged as a readily accessible source of patient information ahead of medical procedures. Although ChatGPT has purported benefits for supporting patient education and information, its actual capability has not been evaluated. Moreover, the March 2023 emergence of paid subscription access to GPT-4 promises further enhanced capabilities requiring evaluation. <b>Methods:</b> ChatGPT was used to generate patient information sheets suitable for gaining informed consent for 7 common procedures in nuclear medicine. Responses were generated independently for both GPT-3.5 and GPT-4 architectures. Specific procedures were selected that had a long-standing history of use to avoid any bias associated with the September 2021 learning cutoff that constrains both GPT-3.5 and GPT-4 architectures. Each information sheet was independently evaluated by 3 expert assessors and ranked on the basis of accuracy, appropriateness, currency, and fitness for purpose. <b>Results:</b> ChatGPT powered by GPT-3.5 provided patient information that was appropriate in terms of being patient-facing but lacked accuracy and currency and omitted important information. GPT-3.5 produced patient information deemed not fit for the purpose. GPT-4 provided patient information enhanced across appropriateness, accuracy, and currency, despite some omission of information. GPT-4 produced patient information that was largely fit for the purpose. <b>Conclusion:</b> Although ChatGPT powered by GPT-3.5 is accessible and provides plausible patient information, inaccuracies and omissions present a risk to patients and informed consent. Conversely, GPT-4 is more accurate and fit for the purpose but, at the time of writing, was available only through a paid subscription.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,560 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,451 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,948 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,797 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations