This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Stakeholder Perceptions of Challenges and Benefits of AI in Diagnostic Imaging: A Systematic Thematic Exploration within the NHS
Citations: 0
Authors: 1
Year: 2025
Abstract
This study critically investigates NHS stakeholder perceptions of artificial intelligence (AI) adoption within diagnostic imaging, exposing the socio-technical, ethical, and institutional tensions that underpin implementation challenges. Despite sustained policy investment, much of the extant literature remains techno-centric, overlooking the epistemic concerns, professional disempowerment, and legitimacy anxieties of frontline radiographers, radiologists, patients, and healthcare leaders. To address this gap, the study adopts a theory-informed secondary qualitative synthesis, integrating the Technology Acceptance Model (TAM) and Stakeholder Theory to interrogate how acceptance, trust, and governance perceptions shape AI readiness. Thirteen UK-based empirical studies (2020–2025) were selected through a PRISMA-guided protocol and analysed using Braun and Clarke's six-phase reflexive thematic analysis. Seven analytically distinct themes emerged: Perceived Benefits of AI; Trust, Explainability, and Human-AI Collaboration; Governance, Ethical, and Safety Barriers; Workforce Readiness and Education Gaps; Equity, Inclusivity, and Bias Risks; Stakeholder Engagement and Co-Production; and Sustainability, Funding, and Public Trust. Findings reveal that trust in AI is not reducible to system accuracy or explainability, but is shaped by power asymmetries, legitimacy deficits, and a lack of structured co-production. Educational gaps, governance ambiguities, and algorithmic bias further exacerbate stakeholder misalignment. Although reliant on secondary data, the study compensates through methodological rigour and conceptual triangulation. This study offers a novel theoretical and empirical contribution by mapping stakeholder-specific tensions and advancing a multidimensional framework for ethically aligned AI governance. It concludes that responsible AI integration in NHS diagnostic imaging depends not solely on technical innovation, but on participatory design, equitable stakeholder inclusion, and institutional trust-building across all levels of the health system.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,324 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,189 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,588 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,470 citations