This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Investigating public perception on use of ChatGPT in initial consultations prior to healthcare provider consultations
Citations: 1
Authors: 7
Year: 2024
Abstract
Background: This study investigates public perception of using AI-powered ChatGPT for initial consultations before seeing healthcare providers. It aims to understand AI's capabilities and how to implement such tools in healthcare settings.

Methods: A survey of nineteen questions was distributed to 391 participants to explore public perceptions of using ChatGPT prior to initial healthcare consultations, with questions spanning several domains of ChatGPT use in healthcare. Collected data were summarized and presented through visual depictions. Continuous variables were expressed as median and interquartile range (IQR), while categorical variables were represented as numbers and percentages. Statistical significance was assessed using the Mann–Whitney U test for continuous variables and the chi-square (χ²) test for categorical variables.

Results: The median satisfaction score was 3.00 (IQR: 2.00–3.00), providing insight into user satisfaction. 42.7% believed AI-powered chatbots adequately address healthcare concerns. Comfort in sharing health information and confidence in accuracy and reliability had median scores of 3.00 (IQR: 2.00–4.00) and 3.00 (IQR: 2.00–3.00), respectively, on a scale of 1 to 5. 31.2% were willing to try AI in their next consultation, 40.4% were unsure, and 28.4% declined. Notably, 64.5% preferred interacting with a human healthcare provider, and 46.0% said their comfort with AI use depended on the specific healthcare concern.

Conclusion: These findings underscore the importance of understanding ChatGPT's capabilities. Further research is needed to determine how ChatGPT can be implemented in current healthcare practice.
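The abstract reports Likert-scale responses as a median with interquartile range. As a minimal illustration of that summary statistic (not the authors' analysis code; the sample data below are invented for demonstration), the median and quartiles can be computed in plain Python:

```python
from statistics import median

def median_iqr(values):
    """Return (median, Q1, Q3) using the simple split-half quartile method:
    Q1 is the median of the lower half, Q3 the median of the upper half."""
    s = sorted(values)
    n = len(s)
    half = n // 2
    m = median(s)
    q1 = median(s[:half])                 # lower half (excludes middle element if n is odd)
    q3 = median(s[half + (n % 2):])       # upper half
    return m, q1, q3

# Hypothetical 1-5 satisfaction scores for demonstration only
scores = [1, 2, 2, 3, 3, 3, 4, 5]
m, q1, q3 = median_iqr(scores)
print(f"median {m:.2f} (IQR: {q1:.2f}, {q3:.2f})")
```

Note that several quartile conventions exist (e.g. linear interpolation, as in NumPy's default); the split-half method shown here is one common choice and can give slightly different IQR bounds than other methods.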
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,336 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,207 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,607 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,476 citations