This is an overview page with metadata for this scientific work. The full article is available from the publisher.
The Impact of AI-Generated Content on Decision Making for Topics Requiring Expertise
Citations: 0
Authors: 3
Year: 2026
Abstract
Modelling users' online decision making and opinion change is a complex issue that must consider users' personal determinants, the nature of the topic, and information retrieval activities. Furthermore, generative AI-based products such as ChatGPT are gradually becoming essential tools for retrieving online information. However, the interaction between domain-specific knowledge and AI-generated content during online decision making remains unclear. We conducted a lab-based explanatory sequential study with university students to address this research gap. In the experiment, we surveyed participants about a set of general-domain topics that are easy to grasp and another set of domain-specific topics that require an adequate level of chemical science knowledge to fully comprehend. We provided participants with decision-supporting information that was either produced using generative AI or collected from selected expert human-written sources, in order to explore the role of AI-generated content compared with ordinary information during decision making. Our results revealed that participants are less likely to change opinions on domain-specific topics. Because participants without professional knowledge had difficulty performing in-depth, independent reasoning based on the information, they favoured relying on conclusions presented in the provided materials and tended to stick to their initial opinions. Moreover, information labelled as AI-generated was as helpful to participants in this experiment as information labelled as dedicatedly human-written, indicating both the vast potential of, and concerns about, AI replacing human experts to help users tackle professional topics or issues.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,479 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,364 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,814 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,543 citations