OpenAlex · Updated hourly · Last updated: 20.04.2026, 11:32

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Explainable Machine Learning Framework for Evaluating Educational Validity and Academic Integrity in AI-Assisted Student Writing

2026 · 0 citations · Zenodo (CERN European Organization for Nuclear Research) · Open Access

Citations: 0 · Authors: 5 · Year: 2026

Abstract

The widespread use of generative artificial intelligence (AI) tools such as ChatGPT, GrammarlyGO, and Quillbot across education sectors has challenged the authenticity of student writing and the validity of academic assessment. Current evaluation systems largely depend on opaque plagiarism detectors or automated scoring models that lack interpretability. This study proposes an explainable machine learning (ML) framework integrating Random Forest, fine-tuned BERT, and SHapley Additive Explanations (SHAP) to distinguish AI-assisted from self-written essays transparently. A dataset of 960 essays from students across different education levels was analyzed. The proposed framework achieved 91.8% accuracy (F1 = 0.895) with Random Forest and 90.2% accuracy (F1 = 0.885) with BERT. SHAP analysis revealed that critical thinking depth and semantic originality were the most influential dimensions differentiating AI-generated from human writing. The results indicate that AI tools enhance linguistic fluency yet reduce reasoning depth and creative variation. This study contributes a transparent, interpretable, and pedagogically aligned approach for detecting AI involvement in education. The framework provides practical guidance for educators and policymakers to ensure responsible AI integration while safeguarding educational validity and academic integrity.
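The pipeline described in the abstract (linguistic features → Random Forest classifier → per-feature attribution) can be sketched in outline. This is a minimal illustration only: the data below is synthetic, the feature names (`critical_thinking_depth`, `semantic_originality`, etc.) are hypothetical stand-ins for the paper's dimensions, and Random Forest's built-in impurity importances are used here in place of the SHAP step, since the paper's actual features and SHAP configuration are not given on this page.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(0)
features = ["critical_thinking_depth", "semantic_originality",
            "lexical_fluency", "sentence_length_var"]

# Synthetic stand-in for the 960-essay dataset: AI-assisted essays (label 1)
# are simulated as more fluent but lower on reasoning depth and originality,
# mirroring the tendency the abstract reports.
n = 960
y = rng.integers(0, 2, n)
X = np.column_stack([
    rng.normal(0.7 - 0.30 * y, 0.15),  # critical_thinking_depth
    rng.normal(0.7 - 0.25 * y, 0.15),  # semantic_originality
    rng.normal(0.5 + 0.30 * y, 0.15),  # lexical_fluency
    rng.normal(0.5, 0.20, n),          # sentence_length_var (uninformative)
])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

pred = clf.predict(X_te)
print(f"accuracy={accuracy_score(y_te, pred):.3f}  F1={f1_score(y_te, pred):.3f}")

# Impurity-based importances as a rough stand-in for SHAP attributions:
for name, imp in sorted(zip(features, clf.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```

In the paper's actual framework, the importance step would instead use `shap.TreeExplainer` on the fitted forest, which yields per-essay attributions rather than a single global ranking.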
