This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Development and validation of an AI use scale for sport and exercise science students
0
Citations
4
Authors
2026
Year
Abstract
Artificial intelligence (AI) is rapidly transforming sport and exercise domains. Yet, sport-science curricula have lagged in integrating AI literacy, and little is known about students’ knowledge, ethical practices, and perceptions regarding AI. Existing measurement tools are often generic and ill-suited to sport education contexts. This study aimed to develop and validate a concise, domain-specific questionnaire to assess sport students’ AI use. A systematic instrument development process was used, including literature review, expert consultation, item generation, and psychometric validation. The resulting 14-item tool covers four domains: AI Awareness, Ethics & Disclosure, Trust & Verification, and Course & Institution Expectations. The instrument was administered to 864 undergraduate sport-science students in China and analysed using exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) on separate training and test sets. EFA supported a four-factor structure with high sampling adequacy (KMO = 0.95) and strong communalities. CFA confirmed good model fit (CFI = 1.00, TLI = 0.99, RMSEA = 0.09, SRMR = 0.05). Subscales demonstrated excellent internal consistency (Cronbach’s α = 0.90–0.94; McDonald’s ω = 0.90–0.94), convergent validity (AVE = 0.77–0.87), and discriminant validity (HTMT ratios < 0.85). This validated, context-specific instrument provides educators with a reliable tool to assess and enhance AI literacy in sport education. The findings support the integration of targeted AI training, ethical instruction, and institutional policies to prepare students for responsible, real-world AI engagement in sport and health domains.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,336 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,207 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,607 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,476 citations