This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Do LLMs Benefit from Self-Ensembles? A Study of Self-Mixture-of-Agents
Citations: 0
Authors: 2
Year: 2025
Abstract
Large Language Models (LLMs) have become increasingly prevalent in natural language processing applications due to their strong performance on a wide range of language tasks. However, their outputs remain prone to inconsistency and flaws, limiting their reliability. To address these limitations, we explore Mixture-of-Agents (MoA), an existing ensemble method that aggregates responses from multiple LLMs and has been shown to enhance performance. Building on this approach, we evaluate Self-Mixture-of-Agents (Self-MoA), an ensemble method that aggregates outputs from only a single top-performing LLM. This raises the central question of whether Self-MoA can yield measurable performance improvements. We evaluate Self-MoA across multiple models and datasets, and additionally apply supervised fine-tuning (SFT) to assess its impact on performance. Our results show that Self-MoA improves performance only on certain models. We hypothesize that model size, output consistency, and the model's intended purpose all contribute to the effectiveness of Self-MoA. Applying SFT, however, yielded no improvement.
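The abstract describes Self-MoA only at a conceptual level: sample several responses from a single strong model, then aggregate them into one answer. The sketch below is an illustrative interpretation of that idea, not the authors' implementation; `query_llm` is a hypothetical placeholder for whatever chat-completion client is available, and the prompts and parameters are assumptions.

```python
# Minimal sketch of the Self-MoA idea (assumed, not the paper's code):
# sample several responses from ONE model at non-zero temperature,
# then ask the same model to synthesize them into a single answer.
from typing import List


def query_llm(prompt: str, temperature: float = 0.7) -> str:
    """Hypothetical wrapper around a chat-completion API call."""
    raise NotImplementedError("Plug in your own LLM client here.")


def self_moa(task_prompt: str, num_samples: int = 4) -> str:
    # Step 1: gather diverse candidate answers from the same model.
    candidates: List[str] = [
        query_llm(task_prompt, temperature=0.7) for _ in range(num_samples)
    ]

    # Step 2: aggregate the candidates with a synthesis prompt, again using
    # the same single model rather than a mixture of different LLMs.
    numbered = "\n\n".join(
        f"Candidate {i + 1}:\n{c}" for i, c in enumerate(candidates)
    )
    aggregation_prompt = (
        "You are given several candidate answers to the same task. "
        "Synthesize them into a single, higher-quality answer.\n\n"
        f"Task:\n{task_prompt}\n\n{numbered}"
    )
    return query_llm(aggregation_prompt, temperature=0.0)
```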