This is an overview page with metadata for this scientific work. The full article is available from the publisher.
HARNESSING LARGE LANGUAGE MODELS FOR HIGH-PERFORMANCE COMPUTING: OPPORTUNITIES AND CHALLENGES
0 citations · 1 author · 2025
Abstract
High-Performance Computing (HPC) is a cornerstone of scientific and engineering advancements, enabling complex computations in areas such as climate modeling, genomics, and artificial intelligence. Concurrently, Large Language Models (LLMs) have emerged as powerful AI-driven tools capable of code optimization, automation, and scientific reasoning. The integration of LLMs into HPC systems presents significant opportunities, including enhanced code generation, improved workload management, and efficient parallel execution. However, this convergence also introduces several challenges, such as high computational costs, scalability issues, memory constraints, security risks, and interpretability concerns. This paper explores the role of LLMs in HPC, discusses existing research and industrial applications, and highlights key challenges and potential solutions. Furthermore, it provides insights into recent advances in AI-powered HPC solutions and presents case studies showcasing real-world implementations. The paper concludes with future research directions, focusing on efficient LLM architectures, integration with emerging HPC technologies, and ethical considerations. The findings emphasize the need for continued innovation to make LLMs more efficient, scalable, and reliable for HPC applications.
Similar Works
Federated Learning: Challenges, Methods, and Future Directions
2020 · 4,398 citations
Deep Learning: Methods and Applications
2014 · 3,306 citations
Mobile Edge Computing: A Survey on Architecture and Computation Offloading
2017 · 2,900 citations
Machine Learning: An Artificial Intelligence Approach
2013 · 2,639 citations
Machine Learning and Deep Learning
2021 · 2,335 citations