This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
UK reporting radiographers’ perceptions of AI in radiographic image interpretation – Current perspectives and future developments
42
Citations
12
Authors
2022
Year
Abstract
INTRODUCTION: Radiographer reporting is accepted practice in the UK. With a national shortage of radiographers and radiologists, artificial intelligence (AI) support in reporting may help minimise the backlog of unreported images. Modern AI is not well understood by human end-users. This may have ethical implications and impact human trust in these systems, due to over- and under-reliance. This study investigates the perceptions of reporting radiographers about AI, gathers information to explain how they may interact with AI in future and identifies features perceived as necessary for appropriate trust in these systems. METHODS: A Qualtrics® survey was designed and piloted by a team of UK AI expert radiographers. This paper reports the third part of the survey, open to reporting radiographers only. RESULTS: 86 responses were received. Respondents were confident in how an AI reached its decision (n = 53, 62%). Less than a third of respondents would be confident communicating the AI decision to stakeholders. Affirmation from AI would improve confidence (n = 49, 57%) and disagreement would make respondents seek a second opinion (n = 60, 70%). There is a moderate trust level in AI for image interpretation. System performance data and AI visual explanations would increase trust. CONCLUSIONS: Responses indicate that AI will have a strong impact on reporting radiographers' decision making in the future. Respondents are confident in how an AI makes decisions but less confident explaining this to others. Trust levels could be improved with explainable AI solutions. IMPLICATIONS FOR PRACTICE: This survey clarifies UK reporting radiographers' perceptions of AI, used for image interpretation, highlighting key issues with AI integration.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,687 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,591 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,114 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,867 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Authors
Institutions
- University of Ulster (GB)
- Royal College of Radiologists (GB)
- St Thomas' Hospital (GB)
- King's College London (GB)
- City, University of London (GB)
- Canterbury Christ Church University (GB)
- Royal London Hospital (GB)
- The London College (GB)
- University College London (GB)
- Churchill Hospital (GB)
- University of Oxford (GB)
- CRUK/MRC Oxford Institute for Radiation Oncology (GB)
- Oxford University Hospitals NHS Trust (GB)
- Leeds Teaching Hospitals NHS Trust (GB)