This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Learning Activation Functions to Improve Deep Neural Networks
Citations: 349
Authors: 4
Year: 2014
Abstract
Artificial neural networks typically have a fixed, non-linear activation function at each neuron. We have designed a novel form of piecewise linear activation function that is learned independently for each neuron using gradient descent. With this adaptive activation function, we are able to improve upon deep neural network architectures composed of static rectified linear units, achieving state-of-the-art performance on CIFAR-10 (7.51%), CIFAR-100 (30.83%), and a benchmark from high-energy physics involving Higgs boson decay modes.
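The abstract describes activation functions that are piecewise linear and learned per neuron by gradient descent alongside the network weights. The sketch below is an illustrative PyTorch module, not the authors' code; it assumes an adaptive piecewise linear form h(x) = max(0, x) + Σ_s a_s · max(0, −x + b_s), with the coefficients a_s and hinge locations b_s held separately for each neuron. The class name, initialization scale, and number of segments are assumptions chosen for the example.

```python
import torch
import torch.nn as nn

class PiecewiseLinearActivation(nn.Module):
    """Illustrative per-neuron learnable piecewise linear activation.

    Computes h(x) = max(0, x) + sum_s a_s * max(0, -x + b_s),
    where a_s and b_s are ordinary parameters, so gradient descent
    updates them together with the network weights.
    """
    def __init__(self, num_neurons: int, num_segments: int = 2):
        super().__init__()
        # One (a, b) pair per hinge segment and per neuron.
        self.a = nn.Parameter(0.01 * torch.randn(num_segments, num_neurons))
        self.b = nn.Parameter(0.01 * torch.randn(num_segments, num_neurons))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_neurons); broadcast across the segment axis.
        rectified = torch.clamp(x, min=0)  # static ReLU part
        hinges = torch.clamp(-x.unsqueeze(1) + self.b.unsqueeze(0), min=0)
        return rectified + (self.a.unsqueeze(0) * hinges).sum(dim=1)

# Usage: drop the module in after a linear layer, matching its width.
layer = nn.Sequential(nn.Linear(128, 64), PiecewiseLinearActivation(64))
y = layer(torch.randn(32, 128))  # shape (32, 64)
```

Because the activation parameters are plain tensors registered on the module, any standard optimizer learns them with no changes to the training loop.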
Related Works
Scikit-learn: Machine Learning in Python
2012 · 63,556 citations
The CLUSTAL_X windows interface: flexible strategies for multiple sequence alignment aided by quality analysis tools
1997 · 39,156 citations
Matplotlib: A 2D Graphics Environment
2007 · 38,556 citations
SciPy 1.0: fundamental algorithms for scientific computing in Python
2020 · 36,507 citations
Array programming with NumPy
2020 · 21,243 citations