OpenAlex · Updated hourly · Last updated: 30.03.2026, 10:18

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

GPT vs. Open-Source LLMs: A Comprehensive Performance and Capability Assessment

2025 · 0 citations · Zenodo (CERN European Organization for Nuclear Research) · Open Access
Open full text at the publisher

0 citations · 8 authors · Year: 2025
Abstract

The increasing use of large language models (LLMs), primarily for text generation and question-answering tasks, has created an urgent need to evaluate how well they perform across varied roles. In any Natural Language Processing (NLP) project, selecting the appropriate model remains a cumbersome job. Proprietary LLMs cater to many of these needs, but detailed comparisons that could guide the best choice are lacking. This study examines the performance of three prominent open-source language models (GPT-2 Small, T5 Small, and DistilBERT) on the text completion task. The goal is to ascertain which of the three alternatives is most appropriate for this task. The Wikitext-2 dataset was employed to fine-tune the models, ensuring uniform training and testing conditions. Metrics such as accuracy, precision, recall, F1-score, BLEU, ROUGE, and perplexity were used to assess performance within a comprehensive evaluation framework. An extensive assessment of each model's efficacy and output quality was achieved by analyzing memory usage, processing time, and output variability. A standardized hardware setup was used for all experiments to ensure fairness and repeatability. This study aims to elucidate the trade-offs between text-generation quality and computational efficiency when selecting the optimal open-source model for text completion tasks.

Keywords: Computational Efficiency, Evaluation Metrics, Language Models, Text Completion, Wikitext-2 Dataset
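Among the metrics the abstract lists, perplexity is the standard intrinsic measure for language models: the exponentiated mean negative log-probability the model assigns to the reference tokens. As a minimal sketch (not the paper's own code; the token probabilities here are made up for illustration):

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the average negative log-probability
    the model assigned to each reference token.

    Lower is better; a uniform k-way guess yields perplexity k.
    """
    n = len(token_probs)
    nll = -sum(math.log(p) for p in token_probs) / n
    return math.exp(nll)

# A model spreading probability uniformly over 4 choices is
# "as confused as a 4-way coin flip": perplexity ~= 4.
print(perplexity([0.25, 0.25, 0.25, 0.25]))  # ≈ 4.0
```

In practice these per-token probabilities come from the fine-tuned model's softmax outputs over the Wikitext-2 test set; this snippet only shows how the aggregate score is formed from them.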

Topics

Topic Modeling · Artificial Intelligence in Healthcare and Education · Text Readability and Simplification