Visual Saliency-Guided Channel Pruning for Deep Visual Detectors in
Autonomous Driving
- URL: http://arxiv.org/abs/2303.02512v1
- Date: Sat, 4 Mar 2023 22:08:22 GMT
- Title: Visual Saliency-Guided Channel Pruning for Deep Visual Detectors in
Autonomous Driving
- Authors: Jung Im Choi and Qing Tian
- Abstract summary: Deep neural network (DNN) pruning has become a de facto component for deploying models on resource-constrained devices.
We propose a novel gradient-based saliency measure for visual detection and use it to guide our channel pruning.
Experiments on the KITTI and COCO traffic datasets demonstrate our pruning method's efficacy and superiority over state-of-the-art competing approaches.
- Score: 3.236217153362305
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural network (DNN) pruning has become a de facto component for
deploying models on resource-constrained devices since it can reduce memory
requirements and computation costs during inference. In particular, channel
pruning gained more popularity due to its structured nature and direct savings
on general hardware. However, most existing pruning approaches utilize
importance measures that are not directly related to the task utility.
Moreover, few in the literature focus on visual detection models. To fill these
gaps, we propose a novel gradient-based saliency measure for visual detection
and use it to guide our channel pruning. Experiments on the KITTI and COCO
traffic datasets demonstrate our pruning method's efficacy and superiority over
state-of-the-art competing approaches. It can even achieve better performance
with fewer parameters than the original model. Our pruning also demonstrates
great potential in handling small-scale objects.
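The paper's saliency measure is detection-specific, but the general idea of gradient-based channel saliency — a first-order Taylor estimate of how much the loss changes if a channel is zeroed out — can be sketched as follows. This is an illustrative sketch only: `channel_saliency` and `prune_channels` are hypothetical names, and NumPy stands in for a real deep-learning framework.

```python
import numpy as np

def channel_saliency(activations, gradients):
    """Score each channel by |activation x gradient|, a first-order
    Taylor estimate of the loss change if the channel is removed.
    Shapes: (batch, channels, height, width)."""
    taylor = activations * gradients            # elementwise product
    # aggregate the absolute contribution over batch and spatial dims
    return np.abs(taylor).sum(axis=(0, 2, 3))

def prune_channels(scores, ratio):
    """Return sorted indices of channels to keep, dropping the
    lowest-scoring fraction `ratio`."""
    n_prune = int(len(scores) * ratio)
    order = np.argsort(scores)                  # ascending: least salient first
    return np.sort(order[n_prune:])

rng = np.random.default_rng(0)
acts = rng.standard_normal((8, 16, 4, 4))
grads = rng.standard_normal((8, 16, 4, 4))
scores = channel_saliency(acts, grads)
keep = prune_channels(scores, ratio=0.25)
print(len(keep))  # 12 of the 16 channels survive
```

In practice the kept indices would be used to slice the layer's weight tensor (and the next layer's input channels) before fine-tuning.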
Related papers
- Separate, Dynamic and Differentiable (SMART) Pruner for Block/Output Channel Pruning on Computer Vision Tasks [6.199556554833467]
Deep Neural Network (DNN) pruning has emerged as a key strategy to reduce model size, improve latency, and lower power consumption on accelerators.
We introduce a Separate, dynamic and differentiable (SMART) pruner for block and output channel pruning.
In our experiments, the SMART pruner consistently demonstrated its superiority over existing pruning methods.
arXiv Detail & Related papers (2024-03-29T04:28:06Z)
- A Fair Loss Function for Network Pruning [70.35230425589592]
We introduce the performance weighted loss function, a simple modified cross-entropy loss function that can be used to limit the introduction of biases during pruning.
Experiments using the CelebA, Fitzpatrick17k and CIFAR-10 datasets demonstrate that the proposed method is a simple and effective tool.
arXiv Detail & Related papers (2022-11-18T15:17:28Z)
- Interpretations Steered Network Pruning via Amortized Inferred Saliency Maps [85.49020931411825]
Convolutional Neural Networks (CNNs) compression is crucial to deploying these models in edge devices with limited resources.
We propose to address the channel pruning problem from a novel perspective by leveraging the interpretations of a model to steer the pruning process.
We tackle this challenge by introducing a selector model that predicts real-time smooth saliency masks for pruned models.
arXiv Detail & Related papers (2022-09-07T01:12:11Z)
- ViNNPruner: Visual Interactive Pruning for Deep Learning [11.232234265070755]
Pruning techniques shrink deep neural networks to smaller sizes while decreasing their performance as little as possible.
We propose ViNNPruner, a visual interactive pruning application that implements state-of-the-art pruning algorithms and the option for users to do manual pruning based on their knowledge.
arXiv Detail & Related papers (2022-05-31T12:21:38Z)
- CATRO: Channel Pruning via Class-Aware Trace Ratio Optimization [61.71504948770445]
We propose a novel channel pruning method via Class-Aware Trace Ratio Optimization (CATRO) to reduce the computational burden and accelerate the model inference.
We show that CATRO achieves higher accuracy with similar cost or lower cost with similar accuracy than other state-of-the-art channel pruning algorithms.
Because of its class-aware property, CATRO is suitable to prune efficient networks adaptively for various classification subtasks, enhancing handy deployment and usage of deep networks in real-world applications.
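The class-aware idea — scoring a channel subset by how well it separates classes, via a between-class over within-class scatter trace ratio — can be sketched with a greedy selection loop. CATRO's actual optimization is more sophisticated; the names and the greedy strategy below are illustrative assumptions.

```python
import numpy as np

def trace_ratio(features, labels, channels):
    """tr(between-class scatter) / tr(within-class scatter) over a
    channel subset; higher means better class separation."""
    X = features[:, channels]
    mean = X.mean(axis=0)
    sb = sw = 0.0
    for c in np.unique(labels):
        Xc = X[labels == c]
        mc = Xc.mean(axis=0)
        sb += len(Xc) * np.sum((mc - mean) ** 2)  # between-class scatter trace
        sw += np.sum((Xc - mc) ** 2)              # within-class scatter trace
    return sb / max(sw, 1e-12)

def greedy_select(features, labels, k):
    """Greedily keep the k channels that maximize the trace ratio."""
    selected, remaining = [], list(range(features.shape[1]))
    for _ in range(k):
        best = max(remaining,
                   key=lambda j: trace_ratio(features, labels, selected + [j]))
        selected.append(best)
        remaining.remove(best)
    return sorted(selected)

# Synthetic sanity check: only channel 0 separates the two classes.
rng = np.random.default_rng(0)
labels = np.repeat([0, 1], 50)
features = rng.standard_normal((100, 8))
features[:, 0] += np.where(labels == 0, -3.0, 3.0)
kept = greedy_select(features, labels, k=2)
print(kept)  # channel 0 should be among the kept channels
```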
arXiv Detail & Related papers (2021-10-21T06:26:31Z)
- Anchor Pruning for Object Detection [6.900480687179143]
This paper proposes anchor pruning for object detection in one-stage anchor-based detectors.
We show that many anchors in the object detection head can be removed without any loss in accuracy.
With additional retraining, anchor pruning can even lead to improved accuracy.
arXiv Detail & Related papers (2021-04-01T12:33:16Z)
- Network Pruning via Resource Reallocation [75.85066435085595]
We propose a simple yet effective channel pruning technique, termed network Pruning via rEsource rEalLocation (PEEL).
PEEL first constructs a predefined backbone and then conducts resource reallocation on it to shift parameters from less informative layers to more important layers in one round.
Experimental results show that structures uncovered by PEEL exhibit competitive performance with state-of-the-art pruning algorithms under various pruning settings.
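The reallocation step — shifting a fixed parameter budget from less informative layers to more important ones — can be illustrated in miniature. PEEL's actual importance estimation is learned; the proportional scheme and names below are assumptions for the sketch.

```python
import numpy as np

def reallocate_channels(base_widths, layer_importance, total_budget=None):
    """Redistribute a fixed channel budget across layers in proportion
    to per-layer importance (illustrative only; PEEL's importance
    estimates come from the network itself)."""
    base = np.asarray(base_widths, dtype=float)
    imp = np.asarray(layer_importance, dtype=float)
    budget = base.sum() if total_budget is None else total_budget
    weights = imp / imp.sum()                     # normalized importance
    # give each layer its share of the budget, keeping at least 1 channel
    return np.maximum(1, np.round(budget * weights)).astype(int)

widths = reallocate_channels([64, 64, 64, 64], [0.1, 0.2, 0.3, 0.4])
print(widths, widths.sum())  # [ 26  51  77 102] 256
```

The total channel count stays at 256, but later (more important) layers end up wider than earlier ones.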
arXiv Detail & Related papers (2021-03-02T16:28:10Z)
- Neural Pruning via Growing Regularization [82.9322109208353]
We extend regularization to tackle two central problems of pruning: pruning schedule and weight importance scoring.
Specifically, we propose an L2 regularization variant with rising penalty factors and show it can bring significant accuracy gains.
The proposed algorithms are easy to implement and scalable to large datasets and networks in both structured and unstructured pruning.
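A minimal sketch of the "rising penalty factor" idea, assuming a plain SGD update; the schedule and names are illustrative, not the paper's exact algorithm. As the L2 factor grows, weights shrink toward zero, exposing the least important ones for pruning.

```python
import numpy as np

def growing_l2_update(w, grad, lam, lr=0.01):
    """One SGD step with an L2 penalty whose factor `lam` rises over
    training (hypothetical linear schedule below)."""
    return w - lr * (grad + lam * w)

w = np.ones(4)
lam = 1e-4
for step in range(1000):
    grad = np.zeros_like(w)          # stand-in for the real loss gradient
    w = growing_l2_update(w, grad, lam)
    lam += 1e-4                      # rising penalty drives weights to 0
print(np.abs(w).max())               # smaller than the initial 1.0
```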
arXiv Detail & Related papers (2020-12-16T20:16:28Z)
- DAIS: Automatic Channel Pruning via Differentiable Annealing Indicator Search [55.164053971213576]
Convolutional neural networks have achieved great success in computer vision tasks despite large computation overheads.
Structured (channel) pruning is usually applied to reduce the model redundancy while preserving the network structure.
Existing structured pruning methods require hand-crafted rules which may lead to tremendous pruning space.
arXiv Detail & Related papers (2020-11-04T07:43:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.