This is an overview page with metadata for this scientific work. The full article is available from the publisher.
ChatGPT for use as a resource for providing patient information relating to functional endoscopic sinus surgery -- a questionnaire-based assessment.
Citations: 0
Authors: 4
Year: 2023
Abstract
Introduction: Ensuring that patients are well informed when making health decisions has become increasingly pressing, particularly in light of resource constraints faced by the NHS. The emergence of artificial intelligence (AI) and natural language processing technologies, such as ChatGPT, offers potential solutions for delivering accessible patient information. This study explores the application of ChatGPT as a patient information tool, focusing on patients undergoing Functional Endoscopic Sinus Surgery (FESS) in the UK.

Methods: To evaluate the effectiveness of ChatGPT in providing patient information, the authors devised three common patient queries related to FESS. These questions were presented both to ChatGPT and to three authors (with validation by a supervising Consultant) to generate 150-word responses. Twenty qualified clinicians, blinded to the origin of each response, then completed a 5-point Likert scale questionnaire to evaluate each response.

Results: Comparing mean scores between author and ChatGPT responses revealed no statistically significant difference in Accuracy, Completeness, Clarity or Appropriateness for any of the three questions. When explaining FESS, ChatGPT responses scored ≥50% on accuracy, clarity and appropriateness. ChatGPT responses scored lower in all areas when asked to describe the alternatives to surgery. When explaining the risks of surgery, ChatGPT's responses scored higher on average.

Conclusions: This study establishes a foundational assessment of ChatGPT's potential utility as a source of patient information within UK ENT departments. Notably, it finds no significant disparities between ChatGPT-generated responses and those crafted by medical experts in the evaluations of accuracy, completeness, clarity, and appropriateness.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,339 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,211 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,614 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,478 citations