This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
The Art of AI Dialogue: Evaluating Applied AI Literacy in Medical Students Using a Performance-Based Rubric — A Single-Institution Observational Study
Citations: 0
Authors: 10
Year: 2025
Abstract
<bold>Background:</bold> Artificial intelligence (AI) is rapidly transforming healthcare and medical education. While medical students increasingly use generative AI tools in their academic work, existing studies on AI literacy have largely relied on self-reported surveys, providing limited insight into students’ actual behaviors. There remains a critical need for performance-based assessments that evaluate how students engage with AI in real-world tasks. This study aimed to evaluate medical students’ applied AI literacy through analysis of authentic academic artifacts using a structured, behaviorally anchored rubric. <bold>Methods:</bold> As part of a required Evidence-Based Medicine course, thirty third-year medical students submitted research proposals along with corresponding AI chat transcripts. Each submission was independently evaluated by three faculty members using a custom rubric assessing four domains: Transparency, Purposefulness (prompt generation), Verification & Critical Thinking (bias recognition), and Integration. Each domain was scored from 0 to 3 (maximum total: 12). <bold>Results:</bold> The average total score was 5.47 (SD = 1.71), indicating moderate applied AI literacy. Domain-level analysis revealed the highest performance in Transparency (M = 2.08, SD = 0.55) and Integration (M = 1.64, SD = 0.67), while Purposefulness (M = 1.33, SD = 0.69) and Verification & Critical Thinking (M = 0.41, SD = 0.71) were significantly lower. A Friedman test confirmed statistically significant differences across domains (χ²(3) = 50.36, p < 0.001). Post-hoc Wilcoxon signed-rank tests showed that Purposefulness and Verification scored significantly lower than both Transparency and Integration (all p < 0.001). Inter-rater reliability was high across domains (ICC = 0.83–0.93, all p < 0.001), supporting the consistency of the rubric-based evaluation.
<bold>Conclusions:</bold> Performance-based evaluation revealed domain-specific weaknesses in applied AI literacy that remain invisible in self-report-based assessments. These findings support the integration of targeted instruction and authentic assessment into medical curricula to better prepare students for ethical and effective AI engagement. As AI continues to reshape clinical practice, equipping future physicians with these competencies is essential.
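The domain comparison reported above uses the Friedman test, a rank-based alternative to repeated-measures ANOVA for scores from the same subjects across several conditions. As a minimal pure-Python sketch of how that statistic is computed (this is an illustration, not the authors' analysis code; the function name `friedman_statistic` and the toy data are hypothetical):

```python
def friedman_statistic(data):
    """Friedman chi-square statistic for a subjects-by-conditions score table.

    data: list of rows; each row holds one subject's scores across the
    k conditions (here, the four rubric domains). Within each row, scores
    are converted to ranks (ties get the average rank), and the statistic
    is chi2 = 12 / (n*k*(k+1)) * sum(R_j^2) - 3*n*(k+1), where R_j is the
    rank sum of condition j over the n subjects.
    """
    n = len(data)
    k = len(data[0])
    rank_sums = [0.0] * k
    for row in data:
        # sort condition indices by score, then assign average ranks to ties
        order = sorted(range(k), key=lambda j: row[j])
        ranks = [0.0] * k
        i = 0
        while i < k:
            j = i
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1  # extend the tie group
            avg_rank = (i + j) / 2 + 1  # average of 1-based positions i..j
            for m in range(i, j + 1):
                ranks[order[m]] = avg_rank
            i = j + 1
        for col in range(k):
            rank_sums[col] += ranks[col]
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3 * n * (k + 1)


# Toy example: 3 subjects, 3 conditions, perfectly consistent ordering
print(friedman_statistic([[1, 2, 3], [1, 2, 3], [1, 2, 3]]))  # → 6.0
```

In the study's setting (n = 30 students, k = 4 domains) the resulting statistic would be compared against a chi-square distribution with k − 1 = 3 degrees of freedom, which is the χ²(3) reported in the abstract.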
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,324 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,189 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,588 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,470 citations