This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Educational Strategies for Clinical Supervision of Artificial Intelligence Use
Citations: 0
Authors: 3
Year: 2026
Abstract
The emergence of artificial intelligence (AI), particularly large language models (LLMs), has the potential to fundamentally change medical practice. Gains in efficiency, however, come with the risk of reduced independent critical thinking when AI is used improperly, a tradeoff that extends to medical training. Overreliance on AI may lead to loss of previously acquired knowledge (“de-skilling”), failure to develop skills (“never-skilling”), and reinforcement of incorrect practices (“mis-skilling”). Many of these deficiencies stem from the limited transparency of LLM reasoning. Critical thinking remains essential for adaptability when facing uncertainty and bias, and the rapid integration of AI places this skill at the forefront of medical education. This review proposes a structured framework for educators and learners to develop critical thinking while engaging with AI in medical contexts.

The use of AI in learning environments poses several risks. Because AI adoption is recent, both educators and learners are vulnerable to these challenges, underscoring the need for community-based learning in which roles are adaptive and knowledge is shared. De-skilling and never-skilling can occur when learners rely too heavily on AI, off-loading clinical reasoning in ways that limit skill development and reinforcement. Studies have found associations between greater AI use and reduced critical thinking, particularly among younger participants. These findings are supported by a randomized clinical trial showing that AI use hindered performance on complex analytical tasks. In another study, clinicians exposed to biased AI-generated diagnostic predictions were more likely to incorporate these biases into their assessments, demonstrating the risk of mis-skilling. This effect correlates with baseline human performance: clinicians with lower baseline performance were further disadvantaged by AI, whereas those who outperformed AI achieved improved results when AI was appropriately integrated.

To combat these risks, the authors propose a framework for educators to identify and navigate clinical encounters involving AI as opportunities for teaching. When an educator observes a learner interacting with an AI tool, the DEFT-AI framework (diagnosis, evidence, feedback, and teaching) is proposed to guide Socratic discussion. The process begins with diagnosis, discussion, and discourse, during which the learner explains both their clinical reasoning and how and why they engaged with AI. In the evidence phase, the learner provides supporting and opposing evidence for the clinical assessment and evaluates the strengths and limitations of AI in context. During the feedback phase, the educator encourages the learner to reflect on gaps in their own clinical reasoning and AI literacy. The teaching stage builds on this reflection, with targeted instruction and recommendations that generally encourage continued AI engagement, though with variation in use and supervision.

The authors further characterize AI engagement using the centaur-cyborg model. In centaur behavior, users strategically divide tasks between themselves and AI, reserving higher-risk decisions for human oversight. In cyborg behavior, humans and AI collaborate more tightly through iterative drafting and refinement, which can be effective for discrete, low-risk tasks but carries a greater risk of overreliance. Educators are encouraged to help learners move flexibly between these modes through cognitive pauses and critical evaluation of AI use.
Clinical guidelines, published literature, and expert consultation should be used to assess AI output, and prompts should be specific, context-aware, and unbiased. Providing examples and requesting clear explanations of AI reasoning can further support critical adjudication. Overall, the authors support educational practices that integrate AI rather than disavow it. Verification and critical evaluation of AI output are central to effective AI education and, when applied using the DEFT-AI method, can bolster both clinical reasoning and AI literacy. (Abstracted from N Engl J Med. 2025;393(8). doi:10.1056/NEJMra2503232)
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,479 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,364 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,814 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,543 citations