This is an overview page with metadata for this scientific article. The full article is available from the publisher.
AI = Appropriate Insight? ChatGPT Appropriately Answers Parents' Questions for Common Pediatric Orthopaedic Conditions
Citations: 19 · Authors: 4 · Year: 2023
Abstract
<b>Background:</b> Artificial intelligence services, such as ChatGPT (generative pre-trained transformer), can provide parents with tailored responses to their pediatric orthopaedic concerns. We undertook a qualitative study to assess the accuracy of the answers provided by ChatGPT for common pediatric orthopaedic conditions in comparison to OrthoKids ("OK"), a patient-facing educational platform governed by the Pediatric Orthopaedic Society of North America (POSNA). <b>Methods:</b> A cross-sectional study was performed from May 26 to June 18, 2023. The OK website (orthokids.org) was reviewed, and 30 existing questions were collected. The corresponding OK and ChatGPT responses were recorded. Two pediatric orthopaedic surgeons assessed each answer provided by ChatGPT against the corresponding OK response. Answers were graded as AGREE (accurate information; question addressed in full), NEUTRAL (accurate information; question not answered), or DISAGREE (information was inaccurate or could be detrimental to patients' health). The evaluators' responses were compiled; discrepancies were adjudicated by a third pediatric orthopaedist. Additional chatbot answer characteristics, such as unprompted treatment recommendations, bias, and referral to a healthcare provider, were recorded. Data were analyzed using descriptive statistics. <b>Results:</b> The chatbot's answers were graded AGREE for 93% of questions. Two responses were graded NEUTRAL. No responses were graded DISAGREE. Unprompted treatment recommendations were included in 55% of its responses (excluding treatment-specific questions). The chatbot encouraged users to "consult with a healthcare professional" in all responses. Referrals were split nearly evenly between a generic healthcare provider (46%) and specifically a pediatric orthopaedist (54%). The chatbot's provider recommendations were inconsistent across related topics; for example, it recommended a pediatric orthopaedist for only 3 of 5 spine conditions.
<b>Conclusion:</b> Questions pertaining to common pediatric orthopaedic conditions were accurately answered by a chatbot in comparison to a specialty society-governed website. The knowledge that chatbots deliver appropriate responses is reassuring. However, the chatbot frequently offered unsolicited treatment recommendations while recommending an orthopaedic consultation inconsistently. We urge caution for parents who use artificial intelligence without also consulting a healthcare professional. <b>Level of Evidence:</b> IV <b>Key Concepts</b> •Artificial intelligence chatbots are becoming increasingly popular, as demonstrated by the rapid rise of publications on the topic in the last 3 months, and they represent a novel online platform for patient education. •In a comparison spanning 30 common pediatric orthopaedic conditions, >90% of the chatbot's responses were judged to be in agreement with a specialty society's parent- and patient-facing education platform. •The chatbot's responses were largely unbiased and referred patients to a healthcare professional. However, the responses lacked references or cited sources for the provided information.
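The descriptive statistics reported above can be reproduced from the adjudicated grades. The following is a minimal sketch, assuming one final label per question after adjudication; the grade list mirrors the reported counts (28 AGREE, 2 NEUTRAL, 0 DISAGREE out of 30):

```python
from collections import Counter

# Final adjudicated grade per question (counts taken from the Results:
# 93% AGREE, 2 NEUTRAL, 0 DISAGREE across 30 questions).
grades = ["AGREE"] * 28 + ["NEUTRAL"] * 2

counts = Counter(grades)
pct_agree = 100 * counts["AGREE"] / len(grades)

print(f"AGREE: {counts['AGREE']}/{len(grades)} ({pct_agree:.0f}%)")
print(f"NEUTRAL: {counts['NEUTRAL']}, DISAGREE: {counts['DISAGREE']}")
```

The same tally applies to the secondary characteristics (unprompted treatment recommendations, referral type), each recorded as a per-question binary label and summarized as a percentage.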
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,336 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,207 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,607 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,476 citations