This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Exploring Public Trust in AI Healthcare Systems: A Questionnaire-based Study on Bias, Reliability, and Privacy Concerns
Citations: 0
Authors: 5
Year: 2025
Abstract
The healthcare industry is increasingly adopting artificial intelligence (AI) to improve diagnosis, manage virtual consultations, and perform predictive analysis. Despite the growing use of these systems, their value depends on public trust. This study examines public perceptions of AI in healthcare, focusing on the dimensions of reliability, bias, and privacy concerns. A questionnaire survey collected 1,019 responses from members of the public across all age groups, genders, educational levels, and professions. Results indicate that 88.6% of respondents had used healthcare services incorporating AI, such as chatbots, mobile applications, or online diagnostic tools. Opinions on the reliability of AI systems varied: 39.9% considered them as reliable as human professionals, 48.3% less reliable, and 11.8% more reliable. Skepticism about fairness was pronounced: while 46.5% of respondents believed AI treats all people equally fairly, 68% assumed the underlying data is incomplete or biased toward Western populations. Data protection and privacy were major concerns, with 68.4% worried that their health-related information would not be adequately safeguarded. Transparency was also deemed critical: 89% of respondents said AI systems should clearly disclose their limitations. Overall, the results show that consumers are cautiously accepting AI in healthcare: although they value its convenience, they lack trust in its accuracy, fairness, and the protection of their confidential information.
The study suggests that building public trust requires inclusive datasets covering heterogeneous populations, effective privacy controls, and transparency. To gain more refined insight into whether trust in AI health systems increases or decreases over time, future studies should examine cross-cultural perceptions and conduct longitudinal analyses.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,479 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,364 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,814 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,543 citations