This is an overview page with metadata for this scientific work. The full article is available from the publisher.
A Prospective Study Assessing Patient Perception of the Use Of Artificial Intelligence in Radiology
12
Citations
5
Authors
2022
Year
Abstract
Objective: Radiology has been at the forefront of medical technology, including the use of artificial intelligence (AI) and machine learning. However, there remains scant literature on the perspective of patients regarding the clinical use of this technology. This study aimed to assess the opinions of radiology patients on the potential involvement of AI in their medical care. Design: A survey was given to ambulatory outpatients attending our hospital for medical imaging. The survey consisted of questions concerning comfort with radiologist reports, comfort with entirely AI reports, comfort with in-part AI reports, accuracy, data security, and medicolegal risk. Setting: Tertiary academic hospital in Melbourne, Australia. Main outcome measures: Patients were surveyed for their overall comfort with the use of AI in their medical imaging using a Likert scale of 0 to 7. Results: 283 patient surveys were included. Patients rated comfort with their imaging being reported by a radiologist at a mean of 6.5 out of 7, compared with AI alone at a mean of 3.5 out of 7 (p<0.0001), or in-part AI at a mean of 5.4 out of 7 (p<0.0001). Patients felt AI should have a mean accuracy of 91.4% to be usable in a clinical environment. Patients rated their current comfort with data security at a mean of 5.5 out of 7; however, comfort with data security using AI was rated at a mean of 4.4 out of 7 (p<0.0001). Conclusions: Patients trust the holistic role of a radiologist; however, they remain uncomfortable with the clinical use of AI as a standalone product, including its accuracy and data security. If AI technology is to evolve, then it must do so with appropriate involvement of stakeholders, of whom patients are paramount.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,316 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,177 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,575 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,468 citations