This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Exploring reflective vs. instructional AI feedback on students' abstract writing
Citations: 0
Authors: 4
Year: 2026
Abstract
Research topic/aim

This study investigates how different types of AI chatbot feedback (reflective vs. instructional) affect graduate students' learning and revision processes during an abstract writing exercise. Conducted in a Media Studies graduate course, it aims to understand whether reflective, metacognitive questioning from a chatbot encourages deeper engagement with the writing process than direct instructional suggestions do.

Theoretical framework

Learning through feedback is partly shaped by tacit competences such as feedback literacy (Carless & Boud, 2018) and self-regulated learning (Zimmerman, 1990). Advances in artificial intelligence have prompted discussion of when and how AI supports, or undermines, students' development of these feedback and metacognitive competences. This study explores the extent to which different types of AI feedback support metacognition.

Methodology

In a quasi-experimental design, 25 participants were divided into two groups. One group (n=13) interacted with a chatbot designed to prompt them with reflective, metacognitive questions about their abstracts. The other group (n=12) used a chatbot that provided direct instructional feedback and concrete revision suggestions. After independently drafting an abstract for 15 minutes, students spent 25 minutes revising it with chatbot support. Afterwards, they took part in mixed-group discussions about the experience. Data sources include chat logs, observations of the discussion groups, and pre- and post-surveys (assessing AI literacy, feedback literacy, and AI perceptions, together with open-ended reflection questions about the experience).

Expected results/findings

Preliminary results show that the instructional group reported greater immediate improvement in self-assessed abstract quality and made more revisions. However, students in the reflective group indicated deeper engagement with their writing process, as seen in qualitative survey responses and observations. While the instructional chatbot appeared to foster quick fixes and a higher number of revisions, the reflective chatbot encouraged more thoughtful consideration of abstract content and writing strategies. At the same time, the reflective group reported frustration with the interaction and noted that their preconceptions about chatbot interactions influenced their evaluation of the less conventional reflective chatbot.

Nordic relevance

This study examines a Nordic teaching context in which feedback is viewed as a learning tool rather than merely an assessment mechanism. The findings emphasise the importance of looking beyond final product quality and the number of revisions as measures of feedback uptake. The study highlights an instructional design in which students are exposed to different AI feedback types, aligned with learning outcomes that strengthen reflection in writing and learning. While instructional feedback may lead to immediate improvements, reflective prompts could cultivate longer-term learning and writing skills. Further research will explore whether AI systems designed for metacognitive engagement have a lasting impact on self-regulated learning (SRL) and writing competencies. This study contributes to the ongoing conversation about the role of AI in education, highlighting how different feedback paradigms shape students' learning experiences.

References

Carless, D., & Boud, D. (2018). The development of student feedback literacy: Enabling uptake of feedback. Assessment & Evaluation in Higher Education, 43(8), 1315–1325. https://doi.org/10.1080/02602938.2018.1463354

Zimmerman, B. J. (1990). Self-regulated learning and academic achievement: An overview. Educational Psychologist, 25(1), 3–17. https://doi.org/10.1207/s15326985ep2501_2