ViNNPruner: Visual Interactive Pruning for Deep Learning
- URL: http://arxiv.org/abs/2205.15731v1
- Date: Tue, 31 May 2022 12:21:38 GMT
- Title: ViNNPruner: Visual Interactive Pruning for Deep Learning
- Authors: Udo Schlegel, Samuel Schiegg, Daniel A. Keim
- Abstract summary: Pruning techniques help to shrink deep neural networks to smaller sizes while decreasing their performance as little as possible.
We propose ViNNPruner, a visual interactive pruning application that implements state-of-the-art pruning algorithms and the option for users to do manual pruning based on their knowledge.
- Score: 11.232234265070755
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural networks grow vastly in size to tackle more sophisticated tasks. In many cases, such large networks are not deployable on particular hardware and need to be reduced in size. Pruning techniques help to shrink deep neural networks to smaller sizes while decreasing their performance as little as possible. However, such pruning algorithms are often hard to understand just by applying them, and they do not incorporate domain knowledge, which can work against user goals. We propose ViNNPruner, a visual interactive pruning application that implements state-of-the-art pruning algorithms and gives users the option to prune manually based on their knowledge. We show how the application facilitates gaining insights into automatic pruning algorithms and semi-automatically pruning oversized networks to make them more efficient using interactive visualizations.
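To ground what such an application automates, here is a minimal sketch of the kind of magnitude-based pruning a tool like ViNNPruner wraps, using PyTorch's torch.nn.utils.prune. The toy model and the 80% sparsity level are illustrative assumptions, not ViNNPruner's actual code.

```python
# Minimal sketch: global magnitude pruning with PyTorch's built-in utilities.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(
    nn.Linear(784, 300), nn.ReLU(),
    nn.Linear(300, 100), nn.ReLU(),
    nn.Linear(100, 10),
)

# Collect all prunable (module, parameter-name) pairs.
params = [(m, "weight") for m in model.modules() if isinstance(m, nn.Linear)]

# Remove the 80% of weights with the smallest magnitude, network-wide.
prune.global_unstructured(params, pruning_method=prune.L1Unstructured, amount=0.8)

# Report the achieved per-layer sparsity.
for i, (module, _) in enumerate(params):
    sparsity = (module.weight == 0).float().mean().item()
    print(f"layer {i}: {sparsity:.1%} of weights pruned")
```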
Related papers
- NEPENTHE: Entropy-Based Pruning as a Neural Network Depth's Reducer [5.373015313199385]
We propose an eNtropy-basEd Pruning as a nEural Network depTH's rEducer to alleviate deep neural networks' computational burden.
We validate our approach on popular architectures such as MobileNet and Swin-T.
arXiv Detail & Related papers (2024-04-24T09:12:04Z)
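NEPENTHE's exact criterion is defined in the paper; the hedged sketch below only illustrates the core idea of scoring layers by the entropy of their activations and targeting the lowest-entropy layer for removal (the same entropy signal also drives the EGP paper listed next). The toy model, the data, and the binary-entropy score are assumptions.

```python
# Hedged sketch: score each ReLU layer by the Shannon entropy of its on/off
# activation pattern; a near-zero-entropy layer is close to linear (or dead)
# and is a candidate for removal.
import torch
import torch.nn as nn

def relu_state_entropy(pre_activations: torch.Tensor) -> float:
    """Mean binary entropy of each unit being active (pre-activation > 0)."""
    p = (pre_activations > 0).float().mean(dim=0).clamp(1e-6, 1 - 1e-6)
    h = -(p * p.log2() + (1 - p) * (1 - p).log2())
    return h.mean().item()

layers = [nn.Linear(64, 64) for _ in range(4)]
x = torch.randn(512, 64)  # stand-in for a batch of real inputs

entropies = []
h = x
for layer in layers:
    z = layer(h)
    entropies.append(relu_state_entropy(z))
    h = torch.relu(z)

victim = min(range(len(layers)), key=lambda i: entropies[i])
print(f"lowest-entropy layer: {victim} (H = {entropies[victim]:.3f} bits)")
# In NEPENTHE's spirit, such a layer would be linearized/removed and its
# neighbours merged, reducing the network's depth.
```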
- Can Unstructured Pruning Reduce the Depth in Deep Neural Networks? [5.869633234882029]
Pruning is a widely used technique for reducing the size of deep neural networks while maintaining their performance.
In this study, we introduce EGP, an innovative Entropy Guided Pruning algorithm aimed at reducing the size of deep neural networks while preserving their performance.
arXiv Detail & Related papers (2023-08-12T17:27:49Z)
- Visual Saliency-Guided Channel Pruning for Deep Visual Detectors in Autonomous Driving [3.236217153362305]
Deep neural network (DNN) pruning has become a de facto component for deploying models on resource-constrained devices.
We propose a novel gradient-based saliency measure for visual detection and use it to guide our channel pruning.
Experiments on the KITTI and COCO traffic datasets demonstrate our pruning method's efficacy and superiority over state-of-the-art competing approaches.
arXiv Detail & Related papers (2023-03-04T22:08:22Z)
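As a rough illustration of gradient-based channel saliency (the paper's measure is tailored to visual detection; this toy Taylor-style version, the model, and the data are assumptions):

```python
# Hedged sketch: rank channels by the mean |activation * gradient| flowing
# through them, then prune the weakest channels as whole filters.
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 16, 3, padding=1)
x = torch.randn(8, 3, 32, 32)

feat = conv(x)
feat.retain_grad()
loss = feat.pow(2).mean()      # stand-in for a detection loss
loss.backward()

# Taylor-style saliency per output channel: E[|a * dL/da|]
saliency = (feat * feat.grad).abs().mean(dim=(0, 2, 3))
prune_idx = saliency.argsort()[:4]  # the 4 least-salient channels
print("channels to prune:", prune_idx.tolist())

# Zero the corresponding filters (structured pruning of whole channels).
with torch.no_grad():
    conv.weight[prune_idx] = 0
    conv.bias[prune_idx] = 0
```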
- FuNNscope: Visual microscope for interactively exploring the loss landscape of fully connected neural networks [77.34726150561087]
We show how to explore high-dimensional landscape characteristics of neural networks.
We generalize observations on small neural networks to more complex systems.
An interactive dashboard opens up a number of possible applications.
arXiv Detail & Related papers (2022-04-09T16:41:53Z)
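A sketch of the basic computation behind such loss-landscape exploration: evaluating the loss on a 2-D plane spanned by two normalized random directions around the trained weights. Model, data, normalization, and grid resolution are assumptions, not FuNNscope's implementation.

```python
# Hedged sketch: sample the loss on a 2-D slice of weight space.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))
x, y = torch.randn(256, 2), torch.randn(256, 1)
loss_fn = nn.MSELoss()

theta = [p.detach().clone() for p in model.parameters()]

def random_direction(params):
    # One random tensor per parameter, rescaled to that parameter's norm
    # (a crude form of the normalization used in landscape plots).
    dirs = [torch.randn_like(p) for p in params]
    return [d * (p.norm() / (d.norm() + 1e-12)) for d, p in zip(dirs, params)]

u, v = random_direction(theta), random_direction(theta)
grid = torch.linspace(-1.0, 1.0, 21)
landscape = torch.zeros(len(grid), len(grid))

with torch.no_grad():
    for i, a in enumerate(grid):
        for j, b in enumerate(grid):
            for p, t, du, dv in zip(model.parameters(), theta, u, v):
                p.copy_(t + a * du + b * dv)
            landscape[i, j] = loss_fn(model(x), y)
    for p, t in zip(model.parameters(), theta):   # restore original weights
        p.copy_(t)

print(f"loss range on the slice: [{landscape.min().item():.3f}, "
      f"{landscape.max().item():.3f}]")
```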
- Neural Network Pruning Through Constrained Reinforcement Learning [3.2880869992413246]
We propose a general methodology for pruning neural networks.
Our proposed methodology can prune neural networks to respect pre-defined computational budgets.
We demonstrate the effectiveness of our approach by comparing it with state-of-the-art methods on standard image classification datasets.
arXiv Detail & Related papers (2021-10-16T11:57:38Z)
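The paper's method is a constrained RL agent, which does not fit in a short sketch; the stand-in below (plain global magnitude pruning) illustrates only the constraint being enforced, i.e., pruning until a pre-defined parameter budget is met. The budget and model are assumptions.

```python
# Hedged stand-in: prune by magnitude until a parameter budget is respected.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(100, 50), nn.ReLU(), nn.Linear(50, 10))
budget = 2_000  # maximum number of nonzero weights allowed

weights = torch.cat([m.weight.detach().abs().flatten()
                     for m in model.modules() if isinstance(m, nn.Linear)])
total = weights.numel()
if total > budget:
    # Threshold chosen so that roughly `budget` weights survive
    # (ties at the threshold may prune a few more).
    threshold = weights.kthvalue(total - budget).values
    with torch.no_grad():
        for m in model.modules():
            if isinstance(m, nn.Linear):
                m.weight[m.weight.abs() <= threshold] = 0

nonzero = sum((m.weight != 0).sum().item()
              for m in model.modules() if isinstance(m, nn.Linear))
print(f"nonzero weights: {nonzero} (budget {budget})")
```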
- Neural network relief: a pruning algorithm based on neural activity [47.57448823030151]
We propose a simple importance-score metric that deactivates unimportant connections.
We achieve comparable performance for LeNet architectures on MNIST.
The algorithm is not designed to minimize FLOPs when considering current hardware and software implementations.
arXiv Detail & Related papers (2021-09-22T15:33:49Z)
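A hedged sketch of an activity-based importance score in this spirit: a connection is kept only if its average contribution to the receiving neuron's input is a non-negligible share. The exact metric in the paper differs; the model, data, and 1% threshold are assumptions.

```python
# Hedged sketch: deactivate connections whose average contribution
# |w_ij * a_j| is a negligible share of the receiving neuron's input.
import torch
import torch.nn as nn

layer = nn.Linear(32, 16)
acts = torch.randn(1024, 32).abs()   # stand-in for recorded input activity

# contribution[i, j] ~= E_batch |w_ij * a_j|
contribution = layer.weight.detach().abs() * acts.mean(dim=0)
importance = contribution / contribution.sum(dim=1, keepdim=True)

mask = importance > 0.01             # drop connections below a 1% share
with torch.no_grad():
    layer.weight *= mask
print(f"kept {mask.float().mean().item():.1%} of connections")
```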
- Neural Pruning via Growing Regularization [82.9322109208353]
We extend regularization to tackle two central problems of pruning: pruning schedule and weight importance scoring.
Specifically, we propose an L2 regularization variant with rising penalty factors and show it can bring significant accuracy gains.
The proposed algorithms are easy to implement and scalable to large datasets and networks in both structured and unstructured pruning.
arXiv Detail & Related papers (2020-12-16T20:16:28Z)
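A minimal sketch of the growing-regularization idea: an L2 penalty with a rising factor applied to pruning candidates, driving them toward zero gradually rather than cutting them at once. The schedule, model, and data are assumptions, not the paper's exact algorithm.

```python
# Hedged sketch: L2 penalty with a growing factor on pruning candidates.
import torch
import torch.nn as nn

model = nn.Linear(20, 10)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(128, 20), torch.randn(128, 10)

# Mark the ~50% smallest-magnitude weights as pruning candidates.
w = model.weight.detach().abs()
mask = w < w.median()

penalty = 0.0
for step in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss = loss + penalty * (model.weight[mask] ** 2).sum()
    loss.backward()
    opt.step()
    penalty += 1e-4          # the penalty factor keeps rising

with torch.no_grad():
    model.weight[mask] = 0   # by now these weights are near zero anyway
print(f"final loss: {nn.functional.mse_loss(model(x), y).item():.4f}")
```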
- AttendNets: Tiny Deep Image Recognition Neural Networks for the Edge via Visual Attention Condensers [81.17461895644003]
We introduce AttendNets, low-precision, highly compact deep neural networks tailored for on-device image recognition.
AttendNets possess deep self-attention architectures based on visual attention condensers.
Results show AttendNets have significantly lower architectural and computational complexity when compared to several deep neural networks.
arXiv Detail & Related papers (2020-09-30T01:53:17Z)
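A heavily simplified, hedged sketch in the spirit of a visual attention condenser: condense features into a low-resolution embedding, expand it back into a self-attention map, and modulate the input with it. All shapes and the exact layout are assumptions, not the published AttendNets module.

```python
# Hedged, loose approximation of an attention-condenser-style block.
import torch
import torch.nn as nn

class TinyAttentionCondenser(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.condense = nn.Sequential(           # down-project: cheap embedding
            nn.MaxPool2d(2),
            nn.Conv2d(channels, channels // 2, 3, padding=1), nn.ReLU(),
        )
        self.expand = nn.Sequential(             # back to an attention map
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(channels // 2, channels, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        attn = self.expand(self.condense(x))
        return x * attn                          # selective feature modulation

x = torch.randn(1, 16, 32, 32)
print(TinyAttentionCondenser(16)(x).shape)       # torch.Size([1, 16, 32, 32])
```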
- ESPN: Extremely Sparse Pruned Networks [50.436905934791035]
We show that a simple iterative mask discovery method can achieve state-of-the-art compression of very deep networks.
Our algorithm represents a hybrid approach between single-shot network pruning methods and Lottery-Ticket-type approaches.
arXiv Detail & Related papers (2020-06-28T23:09:27Z)
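A hedged sketch of iterative mask discovery: alternate brief training phases with magnitude-based mask updates that tighten toward extreme sparsity. The model, data, and sparsity schedule are assumptions.

```python
# Hedged sketch: iterative magnitude-based mask discovery.
import torch
import torch.nn as nn

model = nn.Linear(64, 32)
opt = torch.optim.SGD(model.parameters(), lr=0.05)
x, y = torch.randn(256, 64), torch.randn(256, 32)
mask = torch.ones_like(model.weight)

for sparsity in (0.5, 0.8, 0.95, 0.99):          # progressively sparser masks
    for _ in range(100):                         # brief (re)training phase
        opt.zero_grad()
        nn.functional.mse_loss(model(x), y).backward()
        opt.step()
        with torch.no_grad():
            model.weight *= mask                 # keep pruned weights at zero
    # Update the mask: keep only the largest surviving magnitudes.
    w = (model.weight.detach().abs() * mask).flatten()
    k = int(sparsity * w.numel())
    threshold = w.kthvalue(k).values
    mask = (model.weight.detach().abs() > threshold).float()

print(f"final sparsity: {(mask == 0).float().mean().item():.1%}")
```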
- A Framework for Neural Network Pruning Using Gibbs Distributions [34.0576955010317]
Gibbs pruning is a novel framework for expressing and designing neural network pruning methods.
It can train and prune a network simultaneously, in such a way that the learned weights and pruning mask are well-adapted to each other.
We achieve a new state-of-the-art result for pruning ResNet-56 on the CIFAR-10 dataset.
arXiv Detail & Related papers (2020-06-08T23:04:53Z)
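A hedged sketch of the Gibbs-distribution view: sample the pruning mask from a magnitude-dependent distribution whose inverse temperature is annealed so that the mask hardens as training proceeds. This is not the paper's exact formulation; all hyperparameters are assumptions.

```python
# Hedged sketch: stochastic masks from a Gibbs-style distribution with
# annealed inverse temperature beta.
import torch
import torch.nn as nn

weight = nn.Parameter(torch.randn(8, 8))
target_density = 0.25                    # keep ~25% of the weights

for beta in (0.5, 1.0, 2.0, 4.0, 8.0):   # annealed inverse temperature
    energy = -weight.detach().abs()      # low energy = large magnitude
    keep_logits = -beta * energy
    # Offset so the expected density roughly matches the target; as beta
    # grows, the probabilities approach a hard top-k mask.
    probs = torch.sigmoid(keep_logits - keep_logits.quantile(1 - target_density))
    mask = torch.bernoulli(probs)
    # ... one training phase using `weight * mask` would run here ...
    print(f"beta={beta:.1f}: kept {mask.mean().item():.1%}")
```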
- Robust Pruning at Initialization [61.30574156442608]
There is a growing need for smaller, energy-efficient neural networks that make machine learning applications usable on devices with limited computational resources.
For deep NNs, such procedures remain unsatisfactory as the resulting pruned networks can be difficult to train and, for instance, they do not prevent one layer from being fully pruned.
arXiv Detail & Related papers (2020-02-19T17:09:50Z)
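The paper analyzes pruning-at-initialization schemes; the sketch below shows one well-known instance (SNIP-style connection sensitivity) and surfaces the failure mode the summary mentions: with naive scoring, nothing stops a whole layer from being pruned away. The model, data, and 5% keep ratio are assumptions.

```python
# Hedged sketch: SNIP-style pruning at initialization. Score each weight by
# |w * dL/dw| on one batch, keep the global top-k; per-layer survival rates
# reveal the risk of a layer being pruned entirely.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))
x, y = torch.randn(128, 784), torch.randint(0, 10, (128,))

loss = nn.functional.cross_entropy(model(x), y)
loss.backward()

linears = [m for m in model.modules() if isinstance(m, nn.Linear)]
scores = torch.cat([(m.weight * m.weight.grad).abs().flatten() for m in linears])
keep = int(0.05 * scores.numel())              # keep 5% of all connections
threshold = scores.topk(keep).values.min()

with torch.no_grad():
    for m in linears:
        m.weight *= ((m.weight * m.weight.grad).abs() >= threshold).float()

for i, m in enumerate(linears):
    alive = (m.weight != 0).float().mean().item()
    print(f"layer {i}: {alive:.1%} of weights kept")  # near 0% is the failure mode
```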