OpenAlex · Updated hourly · Last updated: 30.04.2026, 10:51

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Using Generative Artificial Intelligence (AI) to Compare Commission on Osteopathic College Accreditation (COCA) and Liaison Committee on Medical Education (LCME) Standards

2025 · 0 citations · Cureus · Open Access
Open full text at the publisher

0 citations · 4 authors · Year: 2025

Abstract

Background: The Liaison Committee on Medical Education (LCME), for medical doctors (MDs), and the Commission on Osteopathic College Accreditation (COCA), for osteopathic doctors (DOs), serve as the accrediting bodies for US medical schools. Although conventional wisdom suggests a number of differences between the two sets of standards, relatively few studies have parsed out substantive distinctions in detail.

Objective: The objective of this research project was to identify significant differences between the LCME and COCA standards and elements.

Design: This study used three generative chatbots, ChatGPT-4o (OpenAI, California, US), Gemini 2.0 Flash (Google DeepMind, London, England, and Google Research, California, US), and Grok 3 (xAI, California, US), to ascertain key distinctions. An identical prompt was given to each chatbot, and each chatbot ranked the key differences between the standards from most significant to least significant.

Key results: Twenty-three themes were identified. The chatbots collectively agreed on two distinct differences: osteopathic manipulative medicine requirements for DOs, and differing approaches to diversity, equity, and inclusion between the standards. Beyond that, results varied but included, for example, the publishing of student outcomes at DO schools, leadership requirements, research requirements, and student narrative expectations.

Conclusions: Discourse has posited that the standards have grown necessarily similar; however, within the elements' details, evident differences remain. Although the chatbot results drew clear distinctions, some responses were less compelling as significant differences, including, for example, student access to mental healthcare, interaction with residents, and the reporting of major changes to the accreditor. The study had several limitations, including the chatbots selected, which were among the popular, publicly available systems. Two errors were produced by, and reconciled with, the chatbots. The study included only the standards and elements, not additional requirements such as the required narrative prompts associated with the elements; therefore, the clear intent of each element may not have been recognized by the chatbots.


Topics

Artificial Intelligence in Healthcare and Education · Clinical Reasoning and Diagnostic Skills · Innovations in Medical Education