This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Preserving Cognitive Ownership in Higher Education: A Sustainable Hybrid Pedagogical Framework for Reasoning-Centred AI Integration
0
Citations
3
Authors
2026
Year
Abstract
This study explores how distinct types of generative artificial intelligence (AI) practice (augmentation, co-construction, and replacement) shape students' reasoning skills and sense of cognitive ownership in academic writing in higher education (HE). The research also responds to growing concerns about the erosion of student commitment, the undermining of autonomy, and threats to ethical learning in HE. To address this gap, an explanatory sequential mixed-methods design was employed: data were collected from 412 UK HE students and complemented with in-depth interviews with 24 participants. Quantitative modelling showed that augmentation strengthens reasoning through reflective engagement, co-construction yields mixed cognitive outcomes, and replacement significantly weakens ownership and efficacy. Qualitative findings revealed the lived experiences behind these practices: some students reported no ethical harm from AI-supported reflection, while others exhibited a quiet erosion of their self-directed learning skills. Incorporating these insights, the study proposes the Hybrid Human–AI Reasoning Integrity Model (HHARIM), a sustainable pedagogical framework for HE that centres human reasoning in ethical AI use. The model highlights cognitive ownership as an essential element and outlines a robust framework for responsible AI use that safeguards learning, ethics, and autonomy in HE. Theoretically, the study contributes HHARIM as a framework for embedding AI effectively while upholding ethical, sustainable, and human-centred learning. Ultimately, the implications of this proposed model should encourage HE systems to adopt sustainable AI pedagogical practices that reinforce academic writing rather than compromise students' learning efficacy.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,400 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,261 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,695 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,506 citations