This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Moderating the AI Revolution: Perceived threat and generative AI implementation in Vietnamese hospitals
Citations: 2
Authors: 3
Year: 2025
Abstract
Generative artificial intelligence (AI) has the potential to revolutionize healthcare by improving diagnostic accuracy, streamlining administrative tasks, and enhancing patient communication. However, healthcare professionals often harbor concerns about AI-related job displacement, data security, and ethical implications, creating a perceived AI threat that may impede its widespread adoption. This study integrates the unified theory of acceptance and use of technology (UTAUT) with perceived AI threat as both a direct and a moderating factor, thereby examining how threat perceptions interact with established adoption drivers in the context of healthcare in Vietnam. A cross-sectional survey was administered to 573 healthcare professionals from major hospitals across Hanoi, Vietnam. Partial least squares structural equation modeling was employed to test the proposed framework, which included performance expectancy (PE), effort expectancy (EE), social influence (SI), facilitating conditions (FC), and perceived AI threat. The results indicate that PE, EE, and SI had significant positive effects on behavioral intention, whereas FC was not a significant predictor. Perceived AI threat demonstrated a strong negative impact on adoption intentions, particularly by moderating and weakening the positive effects of PE and SI. The model explained 79.8% of the variance in AI adoption intention, indicating substantial predictive power. Overall, the findings highlight the importance of addressing existential fears regarding AI in healthcare. Interventions targeting user training, transparent communication, and regulatory support may help mitigate perceived threats and harness AI's benefits.
Highlights
• This study extended UTAUT by incorporating perceived AI threat to model generative AI adoption.
• Performance expectancy, effort expectancy, and social influence significantly predicted adoption intention.
• Perceived AI threat moderates the influence of performance expectancy and social influence.
• The model explains 79.8% of variance in adoption intention, showing substantial predictive relevance (Q²_predict = .782).
• Practical implications include fostering transparent communication and comprehensive training to mitigate the AI threat.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,339 cit.
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,211 cit.
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,614 cit.
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 cit.
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,478 cit.