This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Integrating AI into infection control: Evaluating the accuracy and consistency of four leading platforms across three regions
Citations: 0
Authors: 10
Year: 2026
Abstract
Background: Artificial Intelligence (AI) has emerged as a valuable tool in health care, supporting diagnostics and decision making. However, integration into clinical practice presents challenges, including data quality, accessibility, and regional guideline variations. This study evaluates four major AI platforms (ChatGPT, Meta AI, Copilot, and OpenEvidence) against CDC infection control guidelines for varicella and measles across Canada, Malaysia, and the United Kingdom.

Objectives: To assess the accuracy and consistency of AI responses on infection control measures compared with CDC guidelines, and to evaluate how the platforms handle complex scenarios.

Methods: A comparative analysis of the four AI platforms was conducted using structured questions and clinical case scenarios on varicella and measles. Responses were evaluated for alignment with CDC guidelines. Platform accessibility was tested from Canada, Malaysia, and the United Kingdom, and regional variations were analyzed.

Results: All platforms provided generally accurate information, but discrepancies were noted. ChatGPT and Meta AI mostly aligned with CDC guidelines, while OpenEvidence and Copilot omitted key epidemiological criteria. Meta AI lacked a full explanation of varicella laboratory criteria and was inaccessible in Malaysia. Regional differences in measles postexposure prophylaxis (PEP) recommendations were observed, particularly in Copilot and OpenEvidence. Response consistency varied among platforms.

Conclusions: AI platforms show promise in supporting infection control but exhibit regional variability. Continued refinement of AI tools is essential to ensure their global applicability and accuracy. Consultation with an infection prevention and control (IPAC) physician remains vital for complex cases.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,551 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,443 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,942 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,792 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations