This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
The Presence and Nature of AI-Use Disclosure Statements in Medical Education Journals: A Bibliometric Study
2
Citations
7
Authors
2025
Year
Abstract
Background: As AI use becomes more common in research, disclosure policies have emerged to ensure transparency and appropriateness. However, database research in other fields suggests that disclosure may lag behind AI use. Medical education journal editors report that submitted manuscripts rarely include AI-use disclosures, and they perceive a lack of clarity regarding when and how AI use should be disclosed. However, we lack objective evidence regarding the incidence and nature of AI-use disclosure in medical education.

Methods: Using bibliometric methods, we searched a database of 24 leading medical education journals for articles published between January and July 2025 (n=2,762 articles). Screening with Covidence software excluded 716 non-empirical and/or non-English-language articles. The remainder (n=2,046) were examined for the presence of AI-use disclosures, which were content-analyzed.

Results: Of the empirical articles, 2.5% (n=51) had an AI disclosure statement. BMC Medical Education contained the most disclosures (24), followed by Medical Teacher (7) and Journal of Surgical Education (4). Forty-two articles were authored in non-native English-speaking countries, and 69.4% of all first authors had begun publishing in the past decade. Disclosures averaged 43 words and described use superficially, most commonly "editing" and "translation". Of 18 named tools, ChatGPT was the most common. Most disclosures explicitly attested to author responsibility for AI-produced material. Disclosures usually appeared in acknowledgements; those located in methods sections lacked responsibility attestation. Negative disclosures, attesting that AI was not used, were also present.

Discussion: AI-use disclosures in medical education journals are rare and appear mostly in work from non-native English-speaking regions of the world. A shared disclosure practice is evident: name the tool and affirm author responsibility, but describe use superficially. This suggests a practice of "safe" disclosure that may be more performative than informative, therefore failing to satisfy the goal of ensuring transparent and ethical AI use in research.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,316 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,177 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,575 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,468 citations