This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
AI Is Not a Four-Letter Word: Moving From Resistance to Responsible Integration in Emergency Medicine Education
Citations: 0
Authors: 3
Year: 2025
Abstract
At 7:00 PM in our pediatric emergency department, a first-year resident approached me about a 5-year-old with unilateral painful neck swelling. It was early July, and she was struggling to apply her medical school knowledge base to the patient in front of her. She had good fundamental knowledge: she had taken an excellent history and created a comprehensive differential that included both benign conditions like reactive lymphadenopathy and urgent concerns like Ludwig's angina. But she couldn't effectively prioritize these possibilities or translate them into an actionable workup. She had the information but couldn't synthesize it into clinical decision-making. We turned to an AI tool, structuring the request as a prompt:

Role: You are a pediatric emergency medicine reference tool.
Action: Generate a structured diagnostic framework.
Context: 5-year-old with acute onset unilateral painful neck swelling, no fever, otherwise well-appearing.
Expectation: Provide a differential diagnosis with likelihood ratios for this age group and key clinical features for each condition.

The AI provided a comprehensive differential including Ludwig's angina, reactive lymphadenopathy, and bacterial lymphadenitis, complete with likelihood ratios and distinguishing features. We played around with the input, changing variables in real time: "What if the swelling were bilateral? What if there were fever and drooling?" Each modification generated updated differentials and clinical reasoning pathways. Throughout, I validated the AI's suggestions, corrected inaccuracies, and emphasized the clinical reasoning behind each decision. What struck me most was how this tool allowed us to explore clinical reasoning in a way that would have been nearly impossible without extensive preparation. Small uses of AI like this have allowed me to take quick moments between the constant flow of patients, alarms, and urgent calls and transform them into meaningful educational experiences.
Medical institutions today face a contradiction in their approach to artificial intelligence. They readily adopt AI tools for operational efficiency and financial advantage, applications with easily measurable return on investment [2]. Yet these same institutions often restrict AI in educational contexts, driven by a narrative that centers on ethical concerns about unvetted educational tools, technological plagiarism, and compromising educational integrity [3, 4]. This approach creates unintended consequences. Our experience and the literature suggest that medical students and residents already use AI, seeking education from sources like YouTube videos, social media, and friends rather than their medical institutions [5]. This disconnect between institutional policy and trainee reality proves problematic. When institutions pretend AI use doesn't exist, they drive use underground, creating the very safety risks and educational integrity issues they sought to prevent.

Research demonstrates this pattern extends beyond individual institutions. Medical education progresses through "culture shock" phases regarding AI integration, from initial honeymoon enthusiasm through frustration to eventual adaptation [6]. Rather than remaining stuck in institutional frustration through restrictive policies, we could guide faculty and trainees through this transition constructively, moving past fear-based prohibition to integrate AI tools meaningfully into our educational mission.

Fortunately, evidence from multiple specialties supports optimistic engagement. Radiology residency programs found that despite varied methodologies, all AI curricula demonstrated positive impacts on trainee knowledge and attitudes, with researchers concluding that starting AI education matters more than perfect implementation [7]. AI tools can serve as accessible, on-demand cognitive bridges that help learners connect different pieces of information into actionable clinical insights [8, 9].
This addresses a fundamental challenge in medical education: the gap between knowledge acquisition and clinical application. Many residents possess extensive factual knowledge but struggle to synthesize this information rapidly in real-world clinical scenarios. Equally important, AI can function as a diagnostic learning tool for faculty, identifying specific trainee knowledge gaps and supporting precision education [9, 10]. Instead of delivering generic teaching sessions that may miss individual learning needs, faculty can target their limited time and energy more effectively. This creates opportunities for personalized education that meets learners where they are, rather than where we assume them to be.

The ultimate goal is not technological integration but enhanced patient care and learning. When institutions accept responsible AI use, we free mental space currently consumed by routine cognitive tasks. A recent study showed a promising reduction in trainees' cognitive burden through the use of AI scribes [11]. Another study showed the power of teaching emergency residents how to break bad news through engagement with an AI chatbot [12]. This type of work shows the promise of AI tools for trainees: to allow them more time for the interpersonal skills that define excellent physicians: active listening, empathy, and therapeutic communication. Rather than disconnecting us from patients, thoughtful AI integration can create space for deeper human connection.

However, studies like these are not enough. Pilot studies are deeply important, but if we expect to make meaningful change, GME leaders need to start working with these tools to understand their potential impact and pitfalls. Faculty with clinical experience and foundational knowledge are perfectly positioned to guide responsible AI integration. I thought back to that intern later in the week.
Without that spontaneous suggestion, she might have struggled through the case, unaware there was a tool that could help her organize her thinking. How many other moments like this are we missing in our training environments? This is not about promoting uncritical AI use. It's about recognizing the need for deliberate, transparent, and educationally sound integration, led by GME. The good news: we don't need to start from scratch. Many of the solutions already exist; we just need to make them accessible, ethical, and GME-specific (Tables 1 and 2). Medicine has always been enhanced by technology that amplifies human capability rather than replacing human judgment. AI represents the next step in this progression. By accepting and guiding this integration, we can create space for the teaching, reflection, and patient connection that define humanistic medicine. The intern struggling to create an actionable plan amidst the busy emergency department, the attending pressed for time with multiple acute cases, the worried patient seeking reassurance: all deserve an educational system that uses technology to create more space for human connection, not less.

Steven McGaughey: conceptualization, writing – original draft, writing – review and editing. Jordan Wackett: conceptualization, writing – review and editing. Elizabeth Silbermann: conceptualization, writing – original draft, writing – review and editing.

The authors have nothing to report. The authors declare no conflicts of interest.
Similar Works

The Strengths and Difficulties Questionnaire: A Research Note
1997 · 14,589 citations

Making sense of Cronbach's alpha
2011 · 13,819 citations

QUADAS-2: A Revised Tool for the Quality Assessment of Diagnostic Accuracy Studies
2011 · 13,630 citations

A method for estimating the probability of adverse drug reactions
1981 · 11,479 citations

Evidence-Based Medicine
1992 · 4,151 citations