This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Demystifying Small Language Models (SLMs): The Future of Generative Artificial Intelligence
Citations: 0 · Authors: 4 · Year: 2025
Abstract
Small Language Models (SLMs) are emerging as an essential part of artificial intelligence (AI), particularly in generative AI and natural language processing (NLP), offering a strong balance of efficiency, accessibility, and performance. In generative AI, these models play a crucial role by enabling the creation of high-quality text, code, and creative content (e.g., poetry and stories). While Large Language Models (LLMs) have demonstrated remarkable capabilities in generating human-like language, they are often limited by high computational costs, large memory requirements, and energy inefficiency. These limitations make LLMs impractical for deployment on edge devices or in low-resource environments, creating demand for SLMs that provide similar functionality with optimized efficiency and reduced resource consumption. Unlike LLMs, SLMs are designed to operate with lower computational requirements, fewer parameters, and minimal memory usage, making them well suited for deployment on resource-constrained devices such as smartphones, IoT devices, and small embedded systems. This paper therefore provides a beginner's guide to SLMs, exploring their core concepts, advantages, and applications.
Related Works
Federated Learning: Challenges, Methods, and Future Directions
2020 · 4,466 citations
Deep Learning: Methods and Applications
2014 · 3,321 citations
Mobile Edge Computing: A Survey on Architecture and Computation Offloading
2017 · 2,915 citations
Machine Learning: An Artificial Intelligence Approach
2013 · 2,639 citations
Machine learning and deep learning
2021 · 2,387 citations