This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Medical students' perceptions and attitudes toward the use of generative artificial intelligence in clinical decision-making: a nationwide cross-sectional survey in China
Citations: 0
Authors: 7
Year: 2026
Abstract
The deep integration of artificial intelligence (AI) into healthcare is reshaping medical practice and education globally. As an emerging technology, generative AI (GenAI) demonstrates significant potential for application in clinical decision-making. Systematically understanding medical students’ perceptions and attitudes toward GenAI is crucial for promoting its responsible implementation in the medical field. This study aimed to investigate Chinese medical students’ perceptions, usage behaviors, and attitudes regarding the use of GenAI for clinical decision-making. This exploratory cross-sectional study was conducted via an online questionnaire from January to March 2025. A total of 1062 medical students from 168 universities and colleges across 29 provinces in China were recruited through convenience sampling. The survey, developed based on the Technology Acceptance Model (TAM) and validated by expert review and pilot testing, descriptively assessed three dimensions: usage, perceptions, and attitudes toward GenAI in clinical decision-making. Descriptive statistics, including frequencies and 95% confidence intervals, were used for data analysis. The vast majority of students (99.4%, n = 1056) reported prior experience with GenAI. The primary application was course learning (71.8%, 95% CI [0.690–0.744]); in contrast, direct use in clinical decision-making was reported less frequently (44.0%, 95% CI [0.410–0.470]). Students widely recognized GenAI’s benefits in broadening knowledge (73.4%, 95% CI [0.706–0.759]), fostering multi-perspective clinical thinking (67.8%, 95% CI [0.649–0.705]), and improving efficiency (63.1%, 95% CI [0.601–0.659]). They also noted significant limitations: primarily its inability to account for individual patient differences in diagnosis (70.7%, 95% CI [0.679–0.734]) and susceptibility to input data bias (65.6%, 95% CI [0.627–0.684]). 
Most students (71.7%, n = 762) were willing to use GenAI in the future, yet strongly opposed its complete replacement of healthcare professionals (79.4%, n = 843) and advocated for safeguards such as strict output auditing (69.6%, 95% CI [0.668–0.723]). This study reveals that medical students maintain a "cautious embrace" attitude toward GenAI: actively utilizing it while consistently emphasizing the central importance of professional judgment. This finding suggests that medical education should focus on cultivating future healthcare professionals who can skillfully employ GenAI as a supportive tool while maintaining critical AI literacy.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,324 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,189 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,588 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,470 citations