This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
ChatGPT’s Response Consistency: A Study on Repeated Queries of Medical Examination Questions
Citations: 46
Authors: 10
Year: 2024
Abstract
< 0.001). (4) Conclusions: The findings underscore the increased accuracy and dependability of ChatGPT 4 in the context of medical education and potential clinical decision making. Nonetheless, the research emphasizes the indispensable nature of human-delivered healthcare and the vital role of continuous assessment in leveraging AI in medicine.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,674 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,583 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,105 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,862 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Authors
Institutions
- Friedrich Schiller University Jena (DE)
- Technical University of Munich (DE)
- TUM Klinikum (DE)
- Harvard University (US)
- Massachusetts General Hospital (US)
- Queen Mary University of London (GB)
- Guangdong Provincial People's Hospital (CN)
- Erasmus University Rotterdam (NL)
- Pontifícia Universidade Católica do Rio de Janeiro (BR)
- Instituto Ivo Pitanguy (BR)
- Ludwig-Maximilians-Universität München (DE)