This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Experts’ opinion following ChatGPT’s responses to prompts on Forensic Odontology—preliminary investigation
Citations: 0
Authors: 5
Year: 2025
Abstract
Background: Large language models (LLMs) have been used extensively over the past five years. Generative pre-trained transformers built on LLM architectures, such as ChatGPT, have represented a paradigm shift in the medical field, including specialties such as Forensic Odontology. The accuracy of their responses to technical commands, however, is debatable. This study aimed to assess the quality of ChatGPT responses through the lenses of experts in Forensic Odontology. Eleven prompt commands were presented individually to ChatGPT™ 3.5 (OpenAI, San Francisco, CA, USA). The commands had three designs, (I) textual objective (n = 3), (II) numeric objective (n = 4), and (III) subjective (n = 4), and addressed three main topics of Forensic Odontology: dental human identification, dental age estimation, and bite mark analysis. The responses obtained from the chatbot were presented to 19 experts in Forensic Odontology (all academic professionals), who rated the quality of the responses on a 5-point Likert scale.

Results: The responses were rated moderate, bad, or very bad in 60% of the cases. Textual objective commands had an accuracy of 55.78%, numeric objective commands had an accuracy of 68.94%, and subjective commands reached an accuracy of 63.94%. The best answers were provided to dental human identification commands (69.47%), followed by dental age estimation (67.01%) and bite mark analysis (63.15%).

Conclusions: In its current form, ChatGPT 3.5 is not able to provide fully reliable information for self-study in Forensic Odontology when deeper layers of knowledge are required. The chatbot can, however, supplement more objective and superficial commands that do not depend on deep search, contextual analysis, and detailed descriptions based on scientific articles.
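The abstract reports expert 5-point Likert ratings aggregated into accuracy percentages per command type. The paper's exact scoring formula is not given on this page; the sketch below shows one common normalization (mapping the 1–5 scale linearly onto 0–100%) purely as an illustration of how such percentages can be derived. The function name and the mapping itself are assumptions, not the authors' method.

```python
def likert_to_percent(ratings):
    """Map a list of 1-5 Likert ratings to a 0-100% score.

    Illustrative only: uses a linear rescaling (mean - 1) / 4 * 100,
    which is one common convention, not necessarily the study's.
    """
    mean = sum(ratings) / len(ratings)
    return (mean - 1) / 4 * 100


# Example: four expert ratings for one hypothetical response
print(likert_to_percent([4, 4, 3, 5]))  # 75.0
```

Under this convention, a panel rating everything "very good" (5) yields 100%, while uniform "very bad" (1) yields 0%.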
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,324 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,189 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,588 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,470 citations