OpenAlex · Updated hourly · Last updated: 28.03.2026, 21:32

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Experts’ opinion following ChatGPT’s responses to prompts on Forensic Odontology—preliminary investigation

2025 · 0 citations · Egyptian Journal of Forensic Sciences · Open Access
Open full text at the publisher

0 citations · 5 authors · Year: 2025

Abstract

Background: Large language models (LLMs) have been used extensively over the past 5 years. Generative pre-trained transformers built on LLM algorithms, such as ChatGPT, have brought a paradigm shift to the medical field, including specialties such as Forensic Odontology. The accuracy of their responses to technical commands, however, is debatable. This study aimed to assess the quality of ChatGPT responses through the lenses of experts in Forensic Odontology. Eleven prompt commands were presented individually to ChatGPT™ 3.5 (OpenAI, San Francisco, CA, USA). The commands had three designs, (I) textual objective (n = 3), (II) numeric objective (n = 4), and (III) subjective (n = 4), and addressed three main topics of Forensic Odontology: dental human identification, dental age estimation, and bite mark analysis. The responses obtained from the chatbot were presented to 19 experts in Forensic Odontology (all academic professionals), who rated the quality of the responses on a 5-point Likert scale.

Results: The responses were rated moderate, bad, or very bad in 60% of the cases. Textual objective commands had an accuracy of 55.78%, numeric objective commands 68.94%, and subjective commands 63.94%. The best answers were provided to dental human identification commands (69.47%), followed by dental age estimation (67.01%) and bite mark analysis (63.15%).

Conclusions: In its current form, ChatGPT 3.5 cannot provide fully reliable information for self-study in Forensic Odontology when deeper layers of knowledge are required. The chatbot can, however, serve as a supplement for more objective and superficial commands that do not depend on deep search, contextual analysis, or detailed descriptions based on scientific articles.


Topics

Artificial Intelligence in Healthcare and Education · Medical Malpractice and Liability Issues · Clinical Reasoning and Diagnostic Skills