This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Validating AI-assisted evaluation of open science practices in brain sciences: ChatGPT, Claude, and human expert comparisons
Citations: 0
Authors: 4
Year: 2025
Abstract
This study investigates the efficacy of AI-assisted evaluation of open science practices in brain sciences, comparing ChatGPT 4 and Claude 3.5 Sonnet against human expert assessment. We analysed 100 randomly selected journal articles across various brain science disciplines using a 6-item transparency checklist. Three human experts and two AI chatbots independently evaluated the articles. Results showed strong correlations between human and AI chatbot overall ratings. Both chatbots demonstrated high concordance with humans in assessing code sharing, materials availability, preregistration, and sample size rationales. However, they struggled with accurately identifying the presence of data availability statements and assessing public accessibility of shared data. These findings suggest that AI chatbots can effectively support the evaluation of some open science practices and potentially expedite the assessment process in academic research. However, their limitations in certain areas highlight the continued importance of human oversight in ensuring comprehensive and accurate evaluations of scientific transparency.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,513 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,407 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,882 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,571 citations