This is an overview page with metadata for this scientific publication. The full article is available from the publisher.
How AI literacy correlates with affective, behavioral, cognitive and contextual variables: A systematic review
Citations: 0
Authors: 3
Year: 2025
Abstract
This systematic review maps the empirical landscape of AI literacy by examining its correlations with a diverse array of affective, behavioral, cognitive, and contextual variables. Building on the review of AI literacy scales by Lintner (2024), we analyzed 31 empirical studies that applied six of those AI literacy scales, covering 14 countries and a range of participant groups. Our findings reveal robust positive correlations of AI literacy with AI self-efficacy, positive AI attitudes, motivation, and digital competencies, and negative correlations with AI anxiety and negative AI attitudes. Personal factors such as age appear largely uncorrelated with AI literacy. The review also reveals measurement challenges: discrepancies between self-assessment scales and performance-based tests suggest that metacognitive biases such as the Dunning-Kruger effect may inflate certain correlations obtained with self-assessment AI literacy scales. Despite these challenges, the robust findings provide a solid foundation for future research.

• Synthesizes 31 empirical studies that applied six AI literacy instruments in 14 countries, mapping correlations with 88 affective, behavioral, cognitive, and contextual variables
• Finds consistent, medium-to-strong correlations between AI literacy and AI self-efficacy, positive AI attitudes, and digital competencies
• Uncovers that self-assessment scales show systematically higher correlations than a performance-based test, hinting at metacognitive bias in self-reports
• Notes that studies draw mainly from university samples, especially health disciplines, pointing to the need for research in K-12 settings, workplaces, and under-represented fields
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,316 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,177 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,575 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,468 citations