This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Human–AI Co-Orchestration in Data Science Education: Interactive, Adaptive, and Personalized Lecture Design for Diverse Learners
Citations: 0
Authors: 1
Year: 2025
Abstract
Large language models such as ChatGPT are rapidly entering university classrooms, yet their role in transforming the large-lecture formats central to data science education remains under-theorized. Existing research overwhelmingly focuses on micro-level uses—automated feedback, tutoring dialogues, or static lesson-plan generation—rather than on the lecture as a dynamic space where conceptual difficulty, learner heterogeneity, and instructional pacing collide. This opinion article argues that data science, perhaps more than any other field, stands to benefit from a reconceptualization of the lecture as a site of human–AI co-orchestration, where ChatGPT assists with pre-class design, in-class adaptivity, and post-class personalization. Drawing on ICAP, self-regulated learning, cognitive load theory, and recent developments in AI-enabled personalized learning, the article proposes that LLMs can help instructors lower barriers for learners with little or no computing background, offer multiple representational pathways for complex concepts, and support real-time differentiation without fragmenting whole-class instruction. Rather than promoting AI as a surrogate teacher, the article positions ChatGPT as a generative partner that augments instructor agency and expands the possibilities for interactive, constructive, and metacognitively rich data science learning. It concludes by outlining a research and design agenda for investigating the pedagogical, ethical, and practical implications of this approach and calls for the computing education community to engage proactively in shaping responsible, theory-informed uses of LLMs in data science classrooms.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,349 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,219 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,631 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,480 citations