This is an overview page with metadata for this scientific work. The full article is available from the publisher.
When AI goes wrong: Fatal errors in OpenAI-based oncological research reviewing assistance
Citations: 15 · Authors: 1 · Year: 2024
Abstract
This letter to the editor discusses the use of artificial intelligence (AI) techniques, specifically Elsevier's ChatGPT-based "Review Assistant", for reviewing scientific articles. While the tool offers clear benefits, such as detecting linguistic and typographical errors in manuscripts, it also has limitations. The letter highlights an example in which the AI gave an incorrect and potentially dangerous answer regarding the bond energies of molecules in an oral tumor. This mistake shows that using AI to evaluate scientific research can be a double-edged sword: it may provide inaccurate information with serious consequences.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,349 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,219 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,631 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,480 citations