OpenAlex · Updated hourly · Last updated: 30.03.2026, 17:38

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Comparing AI-Generated Responses: A Study on ChatGPT, Gemini, and Copilot in Education

2025 · 1 citation · Journal of Educational Technology Systems
Open full text at the publisher

Citations: 1

Authors: 2

Year: 2025

Abstract

This study evaluates the performance of three leading AI chatbots—OpenAI’s ChatGPT, Google’s Gemini, and Microsoft Bing Copilot—in answering multiple-choice questions (MCQs) from the UGC-NET Education paper. Using 150 randomly selected questions from examination cycles between June 2019 and December 2023, the chatbots’ accuracy was assessed against the official answer key. Copilot demonstrated the highest accuracy (86%), followed by Gemini (79.33%) and ChatGPT (78.67%). Unit-wise analysis revealed distinct strengths: Copilot excelled in “Pedagogy and Technology in Education,” Gemini performed best in “Research in Education,” while ChatGPT demonstrated a balanced performance. Chi-square analysis indicated no statistically significant differences among the chatbots. These findings highlight AI’s potential as a supplementary educational tool while underscoring the need for improvements in handling complex topics. The study offers recommendations for enhancing chatbot algorithms to improve their effectiveness in academic contexts, providing valuable insights for educators and developers regarding AI integration in education.
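The abstract's chi-square finding can be checked arithmetically. The sketch below is an assumption-laden reconstruction, not the authors' code: correct-answer counts are derived by rounding the reported accuracies over 150 questions (86% → 129, 79.33% → 119, 78.67% → 118), and the Pearson chi-square statistic for the resulting 2×3 contingency table is computed by hand.

```python
# Reconstructed counts (assumption: percentages of 150 questions, rounded).
correct = {"Copilot": 129, "Gemini": 119, "ChatGPT": 118}
n = 150  # questions per chatbot

# 2x3 contingency table: row 0 = correct, row 1 = incorrect; columns = chatbots.
observed = [list(correct.values()),
            [n - c for c in correct.values()]]

row_totals = [sum(row) for row in observed]           # [366, 84]
col_totals = [sum(col) for col in zip(*observed)]     # [150, 150, 150]
grand_total = sum(row_totals)                         # 450

# Pearson chi-square: sum over cells of (observed - expected)^2 / expected,
# where expected = row_total * col_total / grand_total.
chi2 = sum(
    (observed[i][j] - row_totals[i] * col_totals[j] / grand_total) ** 2
    / (row_totals[i] * col_totals[j] / grand_total)
    for i in range(2) for j in range(3)
)
df = (2 - 1) * (3 - 1)  # degrees of freedom = 2

print(f"chi2 = {chi2:.3f} on {df} df")
```

With these counts the statistic comes out to about 3.25, well below the 5.991 critical value for 2 degrees of freedom at α = 0.05, which is consistent with the paper's conclusion of no statistically significant difference. The same statistic would be produced by `scipy.stats.chi2_contingency(observed, correction=False)`.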

Topics

Artificial Intelligence in Healthcare and Education · Explainable Artificial Intelligence (XAI)