This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Engagement strategies in human-written and AI-generated academic essays: A corpus-based study
Citations: 0
Authors: 4
Year: 2025
Abstract
Based on an appraisal theory framework, this corpus-based study explores the use and functions of engagement strategies in human-written and AI-generated academic essays. A total of 80 essays were analysed: 40 human-written essays from the LOCNESS corpus, which comprises essays by university-level native English writers, and 40 essays generated by ChatGPT. A mixed-methods approach was employed, combining quantitative analyses (including chi-square tests) with qualitative analyses of Expansion and Contraction strategies. The analysis shows that both Expansion and Contraction strategies occur significantly more often in human-written texts than in AI-generated texts. Native English writers use a significantly higher proportion of Entertain markers, showing sensitivity to alternative standpoints, and deploy Disclaim markers to actively oppose counterarguments. AI-generated texts, in contrast, rely heavily on objective citing and hedging, make little use of strong Proclaim markers, and almost entirely lack Concur dialogistic positions. There is a striking contrast in engagement functions: human writers employ a significantly higher proportion of complex rhetoric and deeper argumentation, as supported by the statistical analysis. The findings have implications for educators and writing instructors aiming to enhance students' argumentative skills, and for developers of AI writing tools seeking to improve rhetorical complexity and engagement in generated texts.

• Human-written essays use more engagement strategies to enhance argumentation.
• AI-generated texts rely heavily on hedges while lacking rhetorical complexity.
• Human-written texts employ Contraction strategies to effectively refute counterarguments.
• AI-generated essays use weaker assertion markers, which reduces persuasiveness.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,339 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,211 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,614 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,478 citations