This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Sparse Convolutional Neural Networks
Citations: 742
Authors: 5
Year: 2015
Abstract
Deep neural networks have achieved remarkable performance in both image classification and object detection problems, at the cost of a large number of parameters and computational complexity. In this work, we show how to reduce the redundancy in these parameters using a sparse decomposition. Maximum sparsity is obtained by exploiting both inter-channel and intra-channel redundancy, with a fine-tuning step that minimizes the recognition loss caused by maximizing sparsity. This procedure zeros out more than 90% of parameters, with a drop in accuracy of less than 1% on the ILSVRC2012 dataset. We also propose an efficient sparse matrix multiplication algorithm on CPU for Sparse Convolutional Neural Network (SCNN) models. Our CPU implementation demonstrates much higher efficiency than off-the-shelf sparse matrix libraries, with a significant speedup realized over the original dense network. In addition, we apply the SCNN model to the object detection problem, in conjunction with a cascade model and sparse fully connected layers, to achieve significant speedups.
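The two steps the abstract describes, zeroing out low-magnitude weights and then multiplying the resulting sparse weight matrix against dense inputs, can be sketched roughly as follows. This is a minimal illustration with hypothetical shapes, using SciPy's stock CSR format in place of the paper's custom CPU kernel and its inter-channel/intra-channel sparse decomposition; the fine-tuning step is omitted.

```python
import numpy as np
from scipy.sparse import csr_matrix

# Hypothetical shapes: 256 filters over 3x3 patches of a 128-channel
# feature map, applied to 4096 spatial positions (im2col layout).
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 128 * 3 * 3))
X = rng.standard_normal((128 * 3 * 3, 4096))

# Magnitude pruning: zero the ~90% of weights with smallest absolute
# value, mimicking the sparsity level reported in the abstract.
threshold = np.quantile(np.abs(W), 0.90)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)

# Compressed sparse row (CSR) storage, so the product below only visits
# nonzero weights. The paper's own kernel is a custom CPU implementation
# that it reports to be faster than off-the-shelf libraries like this one.
W_csr = csr_matrix(W_pruned)
Y = W_csr @ X  # sparse-dense matrix multiplication

print(f"nonzero weight fraction: {W_csr.nnz / W.size:.2f}")  # ~0.10
print(f"output shape: {Y.shape}")                            # (256, 4096)
```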
Related Papers
Deep Residual Learning for Image Recognition
2016 · 218,825 citations
U-Net: Convolutional Networks for Biomedical Image Segmentation
2015 · 87,297 citations
ImageNet classification with deep convolutional neural networks
2017 · 75,672 citations
Very Deep Convolutional Networks for Large-Scale Image Recognition
2014 · 75,502 citations
Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks
2016 · 53,375 citations