Leveraging Structured Pruning of Convolutional Neural Networks
- URL: http://arxiv.org/abs/2206.06247v1
- Date: Mon, 13 Jun 2022 15:29:12 GMT
- Title: Leveraging Structured Pruning of Convolutional Neural Networks
- Authors: Hugo Tessier, Vincent Gripon, Mathieu Léonardon, Matthieu Arzel, David Bertrand, Thomas Hannagan
- Abstract summary: We propose a method that can take any structured pruning mask and generate a network free of the dimensional discrepancies that otherwise prevent actual reduction.
We report gains in energy consumption and inference time for pruned convolutional neural networks on embedded hardware.
- Score: 2.2320512724449233
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Structured pruning is a popular method for reducing the cost of convolutional
neural networks, which are the state of the art in many computer vision tasks.
However, depending on the architecture, pruning introduces dimensional
discrepancies that prevent the actual reduction of pruned networks. To tackle
this problem, we propose a method that can take any structured pruning mask and
generate a network that does not encounter any of these problems and can be
leveraged efficiently. We provide an accurate description of our solution and
report gains in energy consumption and inference time for pruned convolutional
neural networks on embedded hardware.
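To make the abstract's point concrete, the following is a minimal PyTorch sketch (not the authors' code; the two toy layers and the helper name are illustrative). Leveraging a structured pruning mask means physically removing channels: when output channels of one convolution are removed, the matching input channels of the next layer must be removed too. This is exactly the kind of dimensional consistency that becomes hard to maintain in architectures with residual connections, which is the problem the paper addresses.

```python
# Minimal illustrative sketch (not the paper's method): physically remove the output
# channels discarded by a structured pruning mask and adjust the next layer's input
# channels so that the dimensions of the reduced network stay consistent.
import torch
import torch.nn as nn

def apply_channel_mask(conv: nn.Conv2d, next_conv: nn.Conv2d, keep: torch.Tensor):
    """keep: boolean mask of shape (conv.out_channels,), True = channel is kept."""
    idx = torch.nonzero(keep, as_tuple=False).flatten()

    pruned = nn.Conv2d(conv.in_channels, len(idx), conv.kernel_size,
                       conv.stride, conv.padding, bias=conv.bias is not None)
    pruned.weight.data = conv.weight.data[idx].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[idx].clone()

    # The following layer must drop the matching *input* channels,
    # otherwise the pruned network cannot actually be made smaller.
    pruned_next = nn.Conv2d(len(idx), next_conv.out_channels, next_conv.kernel_size,
                            next_conv.stride, next_conv.padding,
                            bias=next_conv.bias is not None)
    pruned_next.weight.data = next_conv.weight.data[:, idx].clone()
    if next_conv.bias is not None:
        pruned_next.bias.data = next_conv.bias.data.clone()
    return pruned, pruned_next

conv1, conv2 = nn.Conv2d(3, 16, 3, padding=1), nn.Conv2d(16, 32, 3, padding=1)
keep = torch.ones(16, dtype=torch.bool)
keep[::2] = False                               # toy mask removing half of the channels
p1, p2 = apply_channel_mask(conv1, conv2, keep)
out = p2(p1(torch.randn(1, 3, 32, 32)))         # shapes remain consistent
```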
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- SieveNet: Selecting Point-Based Features for Mesh Networks [41.74190660234404]
Meshes are widely used in 3D computer vision and graphics, but their irregular topology poses challenges in applying them to existing neural network architectures.
Recent advances in mesh neural networks turn to remeshing and push the boundary of pioneer methods that solely take the raw meshes as input.
We propose SieveNet, a novel paradigm that takes into account both the regular topology and the exact geometry.
arXiv Detail & Related papers (2023-08-24T03:40:16Z)
- Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution [91.3781512926942]
Image super-resolution (SR) has witnessed extensive neural network designs from CNN to transformer architectures.
This work investigates the potential of network pruning for super-resolution to take advantage of off-the-shelf network designs and reduce the underlying computational overhead.
We propose a novel Iterative Soft Shrinkage-Percentage (ISS-P) method by optimizing the sparse structure of a randomly initialized network at each iteration and tweaking unimportant weights with a small amount proportional to the magnitude scale on-the-fly.
arXiv Detail & Related papers (2023-03-16T21:06:13Z)
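A minimal sketch of the soft-shrinkage idea described above (a generic reading with an assumed sparsity level and shrink rate, not the exact ISS-P schedule): instead of hard-zeroing the weights below the pruning threshold, the least important weights are repeatedly shrunk by a small amount proportional to their own magnitude.

```python
# Illustrative soft-shrinkage pruning step (a generic reading of the idea, not the
# exact ISS-P algorithm): weights below the magnitude threshold are shrunk by a
# small fraction of their own value each iteration instead of being zeroed outright.
import torch

def soft_shrink_(weight: torch.Tensor, sparsity: float = 0.5, rate: float = 0.1):
    k = max(1, int(weight.numel() * sparsity))          # number of "unimportant" weights
    threshold = weight.abs().flatten().kthvalue(k).values
    unimportant = weight.abs() <= threshold
    with torch.no_grad():
        weight[unimportant] -= rate * weight[unimportant]   # proportional shrink

w = torch.randn(64, 64)
for _ in range(20):          # applied on-the-fly, small weights decay toward zero
    soft_shrink_(w)
```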
- ResNet Structure Simplification with the Convolutional Kernel Redundancy Measure [3.8637285238278434]
We propose a quantifiable evaluation method, the convolutional kernel redundancy measure, for guiding the network structure simplification.
Our method can maintain the performance of the network and reduce the number of parameters from over $23$ million to approximately $128$ thousand.
arXiv Detail & Related papers (2022-12-01T04:29:28Z)
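The summary above does not spell out the measure itself, so the following is only a hypothetical stand-in for what a kernel redundancy score could look like: the mean absolute pairwise cosine similarity between the flattened kernels of a layer, where values close to 1 suggest that many kernels are redundant and the layer can be simplified.

```python
# Hypothetical sketch of a convolutional kernel redundancy score (the paper's exact
# measure is not given in the summary above): average absolute pairwise cosine
# similarity of one layer's flattened kernels.
import torch
import torch.nn as nn
import torch.nn.functional as F

def kernel_redundancy(conv: nn.Conv2d) -> float:
    k = conv.weight.detach().flatten(start_dim=1)      # (out_channels, in*kh*kw)
    k = F.normalize(k, dim=1)
    sim = k @ k.t()                                    # pairwise cosine similarities
    n = sim.size(0)
    off_diag = sim[~torch.eye(n, dtype=torch.bool)]    # ignore self-similarity
    return off_diag.abs().mean().item()

print(kernel_redundancy(nn.Conv2d(64, 128, 3)))        # random init: low redundancy
```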
- Dimensionality Reduction in Deep Learning via Kronecker Multi-layer Architectures [4.836352379142503]
We propose a new deep learning architecture based on fast matrix multiplication of a Kronecker product decomposition.
We show that this architecture allows a neural network to be trained and implemented with a significant reduction in computational time and resources.
arXiv Detail & Related papers (2022-04-08T19:54:52Z)
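As a sketch of the general idea (an assumption about the mechanism, not the paper's actual architecture): a layer whose weight matrix is a Kronecker product A ⊗ B never needs to materialise the full matrix. With row-major reshaping, (A ⊗ B) vec(X) = vec(A X Bᵀ), so the product reduces to two small matrix multiplications.

```python
# Minimal sketch of a Kronecker-factorised linear layer (illustrative, not the
# paper's architecture): the full (m*p) x (n*q) weight torch.kron(A, B) is never built.
import torch
import torch.nn as nn

class KroneckerLinear(nn.Module):
    def __init__(self, m, n, p, q):
        super().__init__()
        self.A = nn.Parameter(torch.randn(m, n) / n ** 0.5)   # "outer" factor
        self.B = nn.Parameter(torch.randn(p, q) / q ** 0.5)   # "inner" factor
        self.n, self.q = n, q

    def forward(self, x):                      # x: (batch, n*q)
        X = x.view(-1, self.n, self.q)
        Y = self.A @ X @ self.B.t()            # (batch, m, p), two small matmuls
        return Y.flatten(start_dim=1)          # equals x @ torch.kron(self.A, self.B).t()

layer = KroneckerLinear(8, 16, 8, 16)          # 64x256 dense weight replaced by two 8x16 factors
y = layer(torch.randn(4, 256))
```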
- Building Compact and Robust Deep Neural Networks with Toeplitz Matrices [93.05076144491146]
This thesis focuses on the problem of training neural networks which are compact, easy to train, reliable and robust to adversarial examples.
We leverage the properties of structured matrices from the Toeplitz family to build compact and secure neural networks.
arXiv Detail & Related papers (2021-09-02T13:58:12Z)
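A minimal sketch of a structured layer in this spirit (an illustration of Toeplitz parameterisation in general, not the thesis' construction): an n × n Toeplitz matrix is constant along its diagonals, so it is fully determined by 2n − 1 parameters instead of n².

```python
# Illustrative Toeplitz-parameterised linear layer: the weight matrix is rebuilt
# from one coefficient per diagonal, so the layer stores 2n-1 parameters instead of n^2.
import torch
import torch.nn as nn

class ToeplitzLinear(nn.Module):
    def __init__(self, n: int):
        super().__init__()
        self.coeffs = nn.Parameter(torch.randn(2 * n - 1) / n ** 0.5)  # one value per diagonal
        self.n = n

    def weight(self):
        i = torch.arange(self.n).unsqueeze(1)   # row indices
        j = torch.arange(self.n).unsqueeze(0)   # column indices
        return self.coeffs[j - i + self.n - 1]  # constant along each diagonal

    def forward(self, x):                       # x: (batch, n)
        return x @ self.weight().t()

layer = ToeplitzLinear(64)                      # 127 parameters instead of 4096
y = layer(torch.randn(8, 64))
```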
- Structured Convolutions for Efficient Neural Network Design [65.36569572213027]
We tackle model efficiency by exploiting redundancy in the implicit structure of the building blocks of convolutional neural networks.
We show how this decomposition can be applied to 2D and 3D kernels as well as the fully-connected layers.
arXiv Detail & Related papers (2020-08-06T04:38:38Z)
- Weight Pruning via Adaptive Sparsity Loss [31.978830843036658]
Pruning neural networks has regained interest in recent years as a means to compress state-of-the-art deep neural networks.
We propose a robust learning framework that efficiently prunes network parameters during training with minimal computational overhead.
arXiv Detail & Related papers (2020-06-04T10:55:16Z)
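A simplified sketch of pruning during training with a sparsity-inducing loss (a generic stand-in using an L1 penalty and a fixed threshold; the paper's adaptive formulation is not reproduced here):

```python
# Generic sketch of sparsity-regularised training with on-the-fly pruning
# (not the paper's specific adaptive loss): an L1 penalty pushes weights toward
# zero, and a magnitude threshold removes them during training.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
task_loss = nn.CrossEntropyLoss()
lam = 1e-4                                   # weight of the sparsity term

x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))   # toy batch
for step in range(100):
    optimizer.zero_grad()
    sparsity = sum(p.abs().sum() for p in model.parameters())
    loss = task_loss(model(x), y) + lam * sparsity
    loss.backward()
    optimizer.step()
    with torch.no_grad():                    # prune weights the penalty drove near zero
        for p in model.parameters():
            p[p.abs() < 1e-3] = 0.0
```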
- Binary Neural Networks: A Survey [126.67799882857656]
The binary neural network serves as a promising technique for deploying deep models on resource-limited devices.
The binarization inevitably causes severe information loss and, even worse, its discontinuity makes the deep network difficult to optimize.
We present a survey of these algorithms, mainly categorized into the native solutions directly conducting binarization, and the optimized ones using techniques like minimizing the quantization error, improving the network loss function, and reducing the gradient error.
arXiv Detail & Related papers (2020-03-31T16:47:20Z)
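A minimal sketch of the "native" binarization the survey refers to (standard practice rather than any single paper's method): weights are binarized with sign() in the forward pass, while a straight-through estimator supplies gradients, since sign() has zero gradient almost everywhere.

```python
# Illustrative weight binarization with a straight-through estimator (STE):
# the forward pass uses sign(w); the backward pass passes gradients through
# wherever |w| <= 1 so the real-valued weights can still be updated.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)

    @staticmethod
    def backward(ctx, grad_output):
        (w,) = ctx.saved_tensors
        return grad_output * (w.abs() <= 1).float()

class BinaryLinear(nn.Linear):
    def forward(self, x):
        return F.linear(x, BinarizeSTE.apply(self.weight), self.bias)

layer = BinaryLinear(16, 4)
out = layer(torch.randn(2, 16))
out.sum().backward()                          # real-valued weights still receive gradients
```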
- Beyond Dropout: Feature Map Distortion to Regularize Deep Neural Networks [107.77595511218429]
In this paper, we investigate the empirical Rademacher complexity related to intermediate layers of deep neural networks.
We propose a feature distortion method (Disout) for addressing the aforementioned problem.
The superiority of the proposed feature map distortion for producing deep neural networks with higher testing performance is analyzed and demonstrated.
arXiv Detail & Related papers (2020-02-23T13:59:13Z)
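A rough sketch of the feature-map-distortion idea (a generic stand-in; the selection rule, noise scale, and module name are assumptions, not the exact Disout procedure): during training, randomly chosen activations are replaced by perturbed values rather than being zeroed as in dropout.

```python
# Generic feature map distortion module (illustrative only, not Disout itself):
# a random subset of activations is perturbed with noise scaled to the feature map.
import torch
import torch.nn as nn

class FeatureDistortion(nn.Module):
    def __init__(self, prob: float = 0.1, alpha: float = 1.0):
        super().__init__()
        self.prob, self.alpha = prob, alpha

    def forward(self, x):
        if not self.training:
            return x                                      # no distortion at test time
        mask = torch.rand_like(x) < self.prob
        noise = self.alpha * x.std() * torch.randn_like(x)
        return torch.where(mask, x + noise, x)

block = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), FeatureDistortion(0.1))
out = block(torch.randn(2, 3, 32, 32))
```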
- Knapsack Pruning with Inner Distillation [11.04321604965426]
We propose a novel pruning method that optimizes the final accuracy of the pruned network.
We prune the network channels while maintaining the high-level structure of the network.
Our method leads to state-of-the-art pruning results on ImageNet, CIFAR-10 and CIFAR-100 using ResNet backbones.
arXiv Detail & Related papers (2020-02-19T16:04:48Z)
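A simplified sketch of the knapsack view of channel pruning (a greedy approximation with made-up importance and cost values, without the paper's exact solver or its inner-distillation part): keep the channels with the best importance-per-cost ratio until a compute budget is exhausted.

```python
# Greedy knapsack-style channel selection (illustrative approximation only):
# channels are ranked by importance per FLOP and kept while the budget allows.
import torch

def knapsack_select(importance: torch.Tensor, flops: torch.Tensor, budget: float):
    """importance, flops: one value per channel; budget: total FLOPs allowed."""
    order = torch.argsort(importance / flops, descending=True)
    keep, used = [], 0.0
    for c in order:
        if used + float(flops[c]) <= budget:
            keep.append(int(c))
            used += float(flops[c])
    return sorted(keep)

importance = torch.rand(64)               # e.g. BatchNorm |gamma| or filter norms
flops = torch.full((64,), 1.5e6)          # per-channel cost, uniform here for simplicity
kept = knapsack_select(importance, flops, budget=48e6)   # keeps roughly 32 of 64 channels
```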
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.