This is an overview page with metadata for this scientific article. The full article is available from the publisher.
The landscape of artificial intelligence tools and platforms for evidence synthesis: a scoping review
Citations: 1 · Authors: 7 · Year: 2026
Abstract
Evidence synthesis (ES) involves rigorous, reproducible methodologies and is increasingly presented in the form of 'living' systematic reviews. As such, ES is critical to evidence-informed decision-making processes, such as the development, implementation, evaluation and monitoring of health technology assessments, practice guidelines and policies. However, the ES process is time-intensive, typically requiring months or years of extensive manual effort. Technological advancements, particularly artificial intelligence (AI), offer opportunities to automate various ES steps, potentially increasing efficiency and reducing costs. AI tools and platforms, including large language models (LLMs), facilitate faster ES through advanced natural language processing (NLP) capabilities. Despite their potential, AI tools have limitations, including risks of automation bias and a lack of true semantic understanding, and require careful evaluation to ensure trustworthiness. We conducted the first scoping review to update and map all data science tools, including LLMs, that are being developed and/or deployed to optimise ES steps, and to assess their impact in both low- and middle-income countries (LMICs) and high-income countries (HICs). Our scoping review identified 137 studies and 388 such AI tools and platforms. Responding to the World Health Organization's call for safe and ethical AI in health, we document the current landscape to identify barriers and facilitators to equitable and sustainable access for glocal researchers. We further outline three recommendations: (1) promote collaborative AI platforms ensuring equity of access that include the gap regions identified (Latin America, Africa, the Middle East), (2) establish evaluation standards for methods testing and reporting, and (3) emphasise human input and multidisciplinary capacity building for developing and implementing AI tools in ES.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,336 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,207 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,607 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,476 citations