This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Cloud-native and Distributed Systems for Efficient and Scalable Large Language Models -- A Research Agenda
Citations: 0
Authors: 19
Year: 2026
Abstract
The rapid rise of Large Language Models (LLMs) has revolutionized various artificial intelligence (AI) applications, from natural language processing to code generation. However, the computational demands of these models, particularly in training and inference, present significant challenges. Traditional systems are often unable to meet these requirements, necessitating the integration of cloud-native and distributed architectures. This paper explores the role of cloud platforms and distributed systems in supporting the scalability, efficiency, and optimization of LLMs. We discuss the complexities of LLM deployment, including data management, resource optimization, and the need for microservices, autoscaling, and hybrid cloud-edge solutions. Additionally, we examine emerging research trends, such as serverless inference, quantum computing, and federated learning, and their potential to drive the next phase of LLM innovation. The paper concludes with a roadmap for future developments, emphasizing the need for continued research, standardization, and cross-sector collaboration to sustain the growth of LLMs in both research and enterprise applications.
Related Works
Federated Learning: Challenges, Methods, and Future Directions
2020 · 4,466 citations
Deep Learning: Methods and Applications
2014 · 3,321 citations
Mobile Edge Computing: A Survey on Architecture and Computation Offloading
2017 · 2,915 citations
Machine Learning: An Artificial Intelligence Approach
2013 · 2,639 citations
Machine learning and deep learning
2021 · 2,387 citations