This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Artificial Intelligence–Driven Learning in Criminology Education: Students’ Perceptions of AI Presence, Acceptance and Risks
Citations: 0
Authors: 8
Year: 2026
Abstract
This study examined the perceived presence, acceptance, and risks of AI-driven learning among BS Criminology students, addressing the growing integration of artificial intelligence in higher education and the limited empirical evidence in criminology education. Guided by the Unified Theory of Acceptance and Use of Technology (UTAUT), this study employed a quantitative descriptive–correlational design involving 201 criminology students selected through purposive sampling. Data were gathered using a validated survey instrument and analyzed using descriptive statistics, Pearson’s correlation, and multiple linear regression, with reliability and assumption checks conducted before inferential analyses. The results indicated a high perceived AI presence in terms of performance expectancy, pedagogical support, information accuracy, and facilitating conditions, alongside moderate to high levels of acceptance reflected in behavioral intention, actual use, and satisfaction. A strong positive relationship was found between perceived AI presence and acceptance (r = .83, p < .001). Regression analysis identified pedagogical support and facilitating conditions as the strongest predictors of acceptance, jointly explaining 78.4% of the variance. Students also demonstrated high awareness of the risks related to misinformation, overreliance, and academic integrity. The findings affirm the applicability of the UTAUT framework in criminology education and highlight that effective AI adoption depends on guided pedagogical integration, institutional support, and ethical safeguards. The study concluded that AI should function as a supplementary learning tool supported by structured instructional strategies and clear ethical guidelines.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,560 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,451 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,948 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,797 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations