This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
The indexation of retracted literature in seven principal scholarly databases: a coverage comparison of dimensions, OpenAlex, PubMed, Scilit, Scopus, The Lens and Web of Science
29 citations · 2 authors · 2024
Abstract
In this study, the coverage and overlap of retracted publications, retraction notices and withdrawals are compared across seven major scholarly databases, with the aim of identifying discrepancies, pinpointing their causes, and determining which product offers the most accurate picture of retracted literature. Seven scholarly databases were searched to obtain all retracted publications, retraction notices and withdrawals published from 2000 onward. Only web search interfaces were used, except for OpenAlex and Scilit. The findings demonstrate that non-selective databases (Dimensions, OpenAlex, Scilit, and The Lens) index a greater amount of retracted literature than databases that base their indexation on venue selection (PubMed, Scopus, and WoS). The key factors explaining these discrepancies are the indexation of withdrawals and proceedings articles. Additionally, the high coverage of OpenAlex and Scilit could be explained by the inaccurate labeling of retracted documents in Scopus, Dimensions, and The Lens. 99% of the sample is jointly covered by OpenAlex, Scilit and WoS. The study suggests that research on retracted literature requires querying more than one source and that it would be advisable to accurately identify and label this literature in academic databases.
Related Works
International Journal of Scientific and Research Publications
2022 · 2,691 citations
Student writing in higher education: An academic literacies approach
1998 · 2,518 citations
Measuring the Prevalence of Questionable Research Practices With Incentives for Truth Telling
2012 · 2,320 citations
Comparison of Two Methods to Detect Publication Bias in Meta-analysis
2006 · 2,209 citations
How Does ChatGPT Perform on the United States Medical Licensing Examination (USMLE)? The Implications of Large Language Models for Medical Education and Knowledge Assessment
2023 · 1,976 citations