This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI extension
340
Citations
39
Authors
2020
Year
Abstract
The CONSORT 2010 statement provides minimum guidelines for reporting randomised trials. Its widespread use has been instrumental in ensuring transparency in the evaluation of new interventions. More recently, there has been a growing recognition that interventions involving artificial intelligence (AI) need to undergo rigorous, prospective evaluation to demonstrate impact on health outcomes. The CONSORT-AI (Consolidated Standards of Reporting Trials-Artificial Intelligence) extension is a new reporting guideline for clinical trials evaluating interventions with an AI component. It was developed in parallel with its companion statement for clinical trial protocols: SPIRIT-AI (Standard Protocol Items: Recommendations for Interventional Trials-Artificial Intelligence). Both guidelines were developed through a staged consensus process involving literature review and expert consultation to generate 29 candidate items, which were assessed by an international multi-stakeholder group in a two-stage Delphi survey (103 stakeholders), agreed upon in a two-day consensus meeting (31 stakeholders), and refined through a checklist pilot (34 participants). The CONSORT-AI extension includes 14 new items that were considered sufficiently important for AI interventions that they should be routinely reported in addition to the core CONSORT 2010 items. CONSORT-AI recommends that investigators provide clear descriptions of the AI intervention, including instructions and skills required for use, the setting in which the AI intervention is integrated, the handling of inputs and outputs of the AI intervention, the human-AI interaction and provision of an analysis of error cases. CONSORT-AI will help promote transparency and completeness in reporting clinical trials for AI interventions. It will assist editors and peer reviewers, as well as the general readership, to understand, interpret, and critically appraise the quality of clinical trial design and risk of bias in the reported outcomes.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,312 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,169 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,564 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,466 citations
Authors
- Xiaoxuan Liu
- Samantha Cruz Rivera
- David Moher
- Melanie Calvert
- Alastair K. Denniston
- Hutan Ashrafian
- Andrew L. Beam
- An-Wen Chan
- Gary S. Collins
- Ara Darzi
- Jonathan J. Deeks
- M. Khair ElZarrad
- Cyrus Espinoza
- Andre Esteva
- Livia Faes
- Lavinia Ferrante di Ruffano
- John Fletcher
- Robert Golub
- Hugh Harvey
- Charlotte Haug
- Christopher Holmes
- Adrian Jonas
- Pearse A. Keane
- Christopher Kelly
- Aaron Lee
- Cecilia S Lee
- Elaine Manna
- James Matcham
- Melissa D. McCradden
- João Monteiro
- Cynthia D. Mulrow
- Luke Oakden‐Rayner
- Dina N. Paltoo
- Maria Beatrice Panico
- Gary Price
- Samuel Rowley
- Richard S. Savage
- Rupa Sarkar
- Sebastian J. Vollmer
- Christopher Yau
Institutions
- Moorfields Eye Hospital NHS Foundation Trust(GB)
- University College London(GB)
- Moorfields Eye Hospital(GB)
- University Hospitals Birmingham NHS Foundation Trust(GB)
- University of Birmingham(GB)
- Health Data Research UK(GB)
- NIHR Birmingham Biomedical Research Centre(GB)
- University of Ottawa(CA)
- Ottawa Hospital(CA)
- Ottawa Hospital Research Institute(CA)
- National Institute for Health Research(GB)