OpenAlex · Updated hourly · Last updated: 27.04.2026, 12:15

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

The Digital Native Paradox: A Framework for Critical Appraisal of Generative AI in Paediatric Medical Education

2026 · 0 citations · Journal of Paediatrics and Child Health · Open Access
Open full text at publisher

Citations: 0 · Authors: 1 · Year: 2026

Abstract

The integration of generative artificial intelligence (AI) into healthcare is accelerating, with students and trainees, as digital natives, often becoming its earliest and most enthusiastic adopters [1]. The risks of AI are magnified in paediatrics, where diagnostic and therapeutic decisions carry unique ethical and safety considerations, and where failure to contextualise a differential diagnosis for a child's specific age and weight can have devastating consequences. Current educational efforts continue to focus on how to use AI, for example through prompt engineering [2]. However, adoption cannot be decoupled from validation. For the digital native trainee, the primary challenge is integrating the critical appraisal of AI-generated output into their clinical workflow as a core competency.

There is a flawed assumption that trainees' digital fluency equates to safe use of AI. Their ability to critically evaluate the quality, veracity and applicability of digital information is not guaranteed. This creates an appraisal paradox: the very individuals most comfortable using AI are often the least equipped to identify its failures. Their nascent clinical experience leaves them uniquely vulnerable to the plausible-sounding, confident and sophisticated hallucinations that generative AI produces. This mirrors a form of the Dunning-Kruger effect, in which trainees are unconsciously incompetent in complex areas [3]. They lack the deep, tacit knowledge of a senior clinician, who might sense a subtle error. This makes them susceptible to accepting flawed AI guidance: for example, when AI provides a differential diagnosis for a neonate with hypoglycaemia, the trainee may be unable to spot the missing, time-critical metabolic disorder or recognise that the suggested drug dose is based on an adult algorithm. This vulnerability aligns with the risks of AI in medical education identified by Abdulnour et al., specifically ‘mis-skilling’, where trainees internalise incorrect AI-generated logic, and ‘de-skilling’, where reliance on the tool leads to the atrophy of foundational clinical reasoning [4]. Without intentional scaffolding, there is a risk of trainees becoming passive consumers of information rather than active validators of clinical evidence.

Modern medicine is built on evidence-based practice, which explicitly trains clinicians not to take information at face value [5]. Structured tools, such as the CASP (Critical Appraisal Skills Programme) checklists, allow us to deconstruct a study's methodology, assess its validity and determine its applicability [6]. AI output is an opaque and therefore problematic form of evidence: we cannot assess its ‘black box’ methodology. We must therefore shift our focus to appraising the output itself. To address this critical educational gap, I propose the VALIDATE framework (see Figure 1 and Table 1). This framework provides a practical, cognitive scaffold that builds critical appraisal skills for AI output. It begins by guiding the user to verify the output's factual accuracy and provenance against external, gold-standard sources. It then prompts an assessment of internal consistency, potential bias and relevance to the specific paediatric context. The framework moves beyond simple fact-checking to encourage a deeper critique, requiring the user to analyse gaps (omissions) in the AI's reasoning and test the output's robustness. Finally, it instils a crucial safety net, mandating escalation whenever the AI output suggests a novel course of action or replaces the trainee's foundational knowledge.

Although broader frameworks such as DEFT-AI provide a strategy for clinical supervisors to oversee AI use [4], VALIDATE provides the specific cognitive tactic for the learner. It operationalises the ‘Evaluative’ component of supervision, giving the trainee a structured internal checklist to perform before presenting their plan to a senior (Box 1; in the boxed example, the AI output suggests fluid resuscitation and anti-emetics but misses metabolic differentials).

The increasing use of AI in healthcare creates significant governance, patient safety and professionalism issues. The VALIDATE framework can serve as a practical tool for integrating AI use with existing professional standards. VALIDATE is therefore designed as a curricular expectation for paediatric trainees, operationalising the critical appraisal of AI as a core competency akin to evidence-based medicine. Clinical supervisors also have a critical role in modelling safe AI behaviour. In the same way a consultant asks, ‘What does the latest literature say?’, they must now ask, ‘You've used AI to help formulate this plan. Walk me through how you validated its output’. This normalises a culture of sceptical trust and provides supervisors with a tool to assess a trainee's digital professionalism [7]. AI is a tool whose safety is entirely conditional on the judgement of its user. The digital native trainee, while comfortable with the interface, is uniquely vulnerable to the subtle, authoritative-sounding failures of these systems. We must urgently train our future doctors to be its critical validators, not its passive consumers.

Open access publishing facilitated by The University of Newcastle, as part of the Wiley - The University of Newcastle agreement via the Council of Australasian University Librarians. The author has nothing to report. The author declares no conflicts of interest. Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.
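For illustration only, the sequential appraisal described above could be represented as a simple checklist structure. This is a minimal sketch: the step names below paraphrase the steps named in the abstract and are not the article's published Figure 1 or Table 1.

# Minimal illustrative sketch (Python). Step names paraphrase the appraisal steps
# described in the abstract; they are assumptions, not the article's Table 1.
VALIDATE_STEPS = [
    ("Verify", "Check factual accuracy and provenance against external, gold-standard sources."),
    ("Assess", "Check internal consistency, potential bias and relevance to the specific paediatric context (age, weight)."),
    ("Analyse gaps", "Look for omissions in the AI's reasoning, e.g. a missing time-critical differential."),
    ("Test robustness", "Vary the prompt or key clinical details and check whether the answer still holds."),
    ("Escalate", "If the output suggests a novel course of action or replaces foundational knowledge, discuss with a senior before acting."),
]

def appraise(ai_output: str) -> None:
    """Walk through each appraisal question before the plan is presented to a senior."""
    print(f"Appraising AI output: {ai_output!r}")
    for name, question in VALIDATE_STEPS:
        print(f"{name}: {question}")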


Topics

Artificial Intelligence in Healthcare and Education · Clinical Reasoning and Diagnostic Skills · Explainable Artificial Intelligence (XAI)