This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Strategic Approach for Enhancing Deep Learning Models
Citations: 0
Authors: 3
Year: 2026
Abstract
Modern large language models have achieved remarkable growth and performance across domains, yet their intensive resource use and high computational costs challenge scalability and sustainability. Current attempts to surpass baseline (naïve) AutoDL (Automated Deep Learning) models often rely on complex manipulations that yield only marginal accuracy gains while demanding deep domain knowledge and heavy computation. To address these inefficiencies in computation and implementation, this study proposes a strategic approach that improves processing efficiency without compromising model accuracy or performance, through a simplified, scalable methodology. We present a novel AutoDL weight-optimization method that identifies the most accurate deep-learning starting point and achieves the best outcomes even when the additional "presetting" analysis overhead is taken into account. Using 20 real-world datasets, we conducted experiments across three models, six weight configurations, and ten seeds, totaling 62,400 epochs. In all experiments, the optimized model outperformed the baselines, achieving higher accuracy on every dataset while requiring a presetting phase of only two epochs per seed. These results demonstrate that such minimal preprocessing can substantially lower computational demand while maintaining precision. As global demand for AI deployment accelerates, this conservation-oriented approach will be critical to sustaining innovation within resource and infrastructure constraints, enabling advances in computational sustainability and responsible AI development, tangible savings across multiple dimensions of resource consumption, and broader access to deep learning technologies.
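The abstract's core idea, probing candidate weight configurations for only two epochs and then training fully from the most promising one, can be sketched in a few lines. The paper's actual method is not available on this page, so everything below is a hypothetical illustration on a toy regression task: the candidate pool, the two-epoch probe, and the selection criterion are all assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task standing in for a real dataset (hypothetical data).
X = rng.normal(size=(200, 5))
true_w = rng.normal(size=5)
y = X @ true_w + 0.1 * rng.normal(size=200)

def train(w, epochs, lr=0.05):
    """Full-batch gradient descent on mean squared error."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def mse(w):
    return float(np.mean((X @ w - y) ** 2))

# Candidate "weight configurations": here, different init scales (illustrative).
candidates = [scale * rng.normal(size=5) for scale in (0.01, 0.1, 1.0, 3.0)]

# Presetting phase: probe each candidate for only two epochs, score it,
# and keep the most promising starting point.
probed = [train(w, epochs=2) for w in candidates]
best_idx = int(np.argmin([mse(w) for w in probed]))

# Full training continues only from the selected starting point.
final_w = train(probed[best_idx], epochs=100)
print(best_idx, mse(final_w))
```

The key trade-off the abstract highlights is visible here: the probe adds a fixed, small cost (two epochs per candidate) in exchange for starting the expensive full run from the best-behaved initialization.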
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,578 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,470 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,984 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,814 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations