SASL: Saliency-Adaptive Sparsity Learning for Neural Network
Acceleration
- URL: http://arxiv.org/abs/2003.05891v3
- Date: Thu, 30 Jul 2020 02:40:13 GMT
- Title: SASL: Saliency-Adaptive Sparsity Learning for Neural Network
Acceleration
- Authors: Jun Shi, Jianfeng Xu, Kazuyuki Tasaka, Zhibo Chen
- Abstract summary: We propose a Saliency-Adaptive Sparsity Learning (SASL) approach for further optimization.
Our method reduces the FLOPs of ResNet-50 by 49.7% with negligible accuracy degradation (0.39% top-1, 0.05% top-5).
- Score: 20.92912642901645
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accelerating the inference speed of CNNs is critical to their deployment in
real-world applications. Among all pruning approaches, those implementing a
sparsity learning framework have proven effective, as they learn and prune
the models in an end-to-end data-driven manner. However, these works impose the
same sparsity regularization on all filters indiscriminately, which can hardly
result in an optimal structure-sparse network. In this paper, we propose a
Saliency-Adaptive Sparsity Learning (SASL) approach for further optimization. A
novel and effective per-filter measure, i.e., saliency, is designed, which is
estimated from two aspects: the importance for prediction performance and the
computational resources consumed. During sparsity learning,
the regularization strength is adjusted according to the saliency, so our
optimized format can better preserve the prediction performance while zeroing
out more computation-heavy filters. Computing the saliency introduces minimal
overhead to the training process, so SASL is very efficient. During the pruning
phase, a hard sample mining strategy is utilized to optimize the proposed
data-dependent criterion, improving both effectiveness and efficiency. Extensive
experiments demonstrate
the superior performance of our method. Notably, on the ILSVRC-2012 dataset, our
approach reduces the FLOPs of ResNet-50 by 49.7% with negligible accuracy
degradation (0.39% top-1 and 0.05% top-5).
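To make the saliency-adaptive regularization concrete, the sketch below is a minimal PyTorch illustration of the general idea only, not the paper's exact formulation: it proxies a filter's importance by the magnitude of its batch-normalization scale, proxies its cost by per-filter FLOPs, and shrinks the L1 sparsity strength applied to high-saliency filters. The names compute_saliency and adaptive_sparsity_loss, the blending weight alpha, and base_lambda are hypothetical choices made here for illustration.

```python
# Minimal, hypothetical sketch of saliency-adaptive sparsity regularization.
# Importance proxy: |BN scale|; cost proxy: per-filter FLOPs; the L1 strength
# on each BN scale is reduced in proportion to the filter's saliency.
import torch
import torch.nn as nn


def compute_saliency(layers, alpha=0.5):
    """layers: list of (nn.Conv2d, nn.BatchNorm2d, (out_h, out_w)) tuples.

    Returns one saliency tensor per layer. Saliency is high for filters that
    matter for prediction AND are cheap in FLOPs.
    """
    costs, importances = [], []
    for conv, bn, (h, w) in layers:
        kh, kw = conv.kernel_size
        per_filter_flops = float(conv.in_channels * kh * kw * h * w)
        costs.append(torch.full((conv.out_channels,), per_filter_flops))
        importances.append(bn.weight.detach().abs())

    max_cost = max(c.max() for c in costs)      # normalize across the whole network
    max_imp = max(i.max() for i in importances)
    saliencies = []
    for cost, imp in zip(costs, importances):
        norm_cost = cost / (max_cost + 1e-12)
        norm_imp = imp / (max_imp + 1e-12)
        saliencies.append(alpha * norm_imp + (1.0 - alpha) * (1.0 - norm_cost))
    return saliencies


def adaptive_sparsity_loss(layers, saliencies, base_lambda=1e-4):
    """L1 penalty on BN scales whose strength shrinks for high-saliency filters,
    so computation-heavy, low-importance filters are zeroed out first."""
    loss = torch.zeros(())
    for (_, bn, _), sal in zip(layers, saliencies):
        strength = base_lambda * (1.0 - sal)    # per-filter regularization strength
        loss = loss + (strength * bn.weight.abs()).sum()
    return loss


# Toy usage: one conv/BN stage with a 32x32 output feature map.
conv = nn.Conv2d(16, 64, kernel_size=3, padding=1)
bn = nn.BatchNorm2d(64)
layers = [(conv, bn, (32, 32))]
sal = compute_saliency(layers)
reg = adaptive_sparsity_loss(layers, sal)       # add this term to the task loss
```

In practice such a penalty would be added to the task loss at each training step (with the saliencies refreshed only periodically, in the spirit of the abstract's low-overhead claim); after sparsity learning converges, the filters whose BN scales have been driven to near zero are the ones pruned.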
Related papers
- Fisher Information-based Efficient Curriculum Federated Learning with Large Language Models [43.26028399395612]
We propose a Fisher Information-based Efficient Curriculum Federated Learning framework (FibecFed) with two novel methods.
First, we propose a Fisher information-based method to adaptively sample data within each device to improve the effectiveness of the FL fine-tuning process.
Second, we dynamically select the proper layers for global aggregation and sparse parameters for local update with LoRA.
arXiv Detail & Related papers (2024-09-30T18:12:18Z)
- Just How Flexible are Neural Networks in Practice? [89.80474583606242]
It is widely believed that a neural network can fit a training set containing at least as many samples as it has parameters.
In practice, however, we only find the solutions reachable by our training procedure, including gradient-based optimization and regularizers, which limits this flexibility.
arXiv Detail & Related papers (2024-06-17T12:24:45Z)
- From Data Deluge to Data Curation: A Filtering-WoRA Paradigm for Efficient Text-based Person Search [19.070305201045954]
In text-based person search endeavors, data generation has emerged as a prevailing practice, addressing concerns over privacy preservation and the arduous task of manual annotation.
We observe that only a subset of the data in constructed datasets plays a decisive role.
We introduce a new Filtering-WoRA paradigm, which contains a filtering algorithm to identify this crucial data subset and a WoRA learning strategy for light fine-tuning.
arXiv Detail & Related papers (2024-04-16T05:29:14Z)
- Federated Learning of Large Language Models with Parameter-Efficient Prompt Tuning and Adaptive Optimization [71.87335804334616]
Federated learning (FL) is a promising paradigm to enable collaborative model training with decentralized data.
The training process of Large Language Models (LLMs) generally requires updating a significant number of parameters.
This paper proposes an efficient partial prompt tuning approach to improve performance and efficiency simultaneously.
arXiv Detail & Related papers (2023-10-23T16:37:59Z)
- Cost-Sensitive Self-Training for Optimizing Non-Decomposable Metrics [9.741019160068388]
We introduce the Cost-Sensitive Self-Training (CSST) framework which generalizes the self-training-based methods for optimizing non-decomposable metrics.
Our results demonstrate that CSST achieves an improvement over the state-of-the-art in the majority of cases across datasets and objectives.
arXiv Detail & Related papers (2023-04-28T10:31:12Z)
- Towards Compute-Optimal Transfer Learning [82.88829463290041]
We argue that zero-shot structured pruning of pretrained models allows them to increase compute efficiency with minimal reduction in performance.
Our results show that pruning convolutional filters of pretrained models can lead to more than 20% performance improvement in low computational regimes.
arXiv Detail & Related papers (2023-04-25T21:49:09Z)
- Efficient Sharpness-aware Minimization for Improved Training of Neural Networks [146.2011175973769]
This paper proposes the Efficient Sharpness Aware Minimizer (ESAM), which boosts SAM's efficiency at no cost to its generalization performance.
ESAM includes two novel and efficient training strategies: Stochastic Weight Perturbation and Sharpness-Sensitive Data Selection.
We show, via extensive experiments on the CIFAR and ImageNet datasets, that ESAM improves efficiency over SAM, cutting the extra computation required from 100% to 40% relative to the base optimizer.
arXiv Detail & Related papers (2021-10-07T02:20:37Z)
- Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z)
- Generalized Reinforcement Meta Learning for Few-Shot Optimization [3.7675996866306845]
We present a generic and flexible Reinforcement Learning (RL) based meta-learning framework for the problem of few-shot learning.
Our framework could be easily extended to do network architecture search.
arXiv Detail & Related papers (2020-05-04T03:21:05Z)
- Self-Directed Online Machine Learning for Topology Optimization [58.920693413667216]
Self-directed Online Learning Optimization integrates Deep Neural Network (DNN) with Finite Element Method (FEM) calculations.
Our algorithm was tested by four types of problems including compliance minimization, fluid-structure optimization, heat transfer enhancement and truss optimization.
It reduced the computational time by 2 to 5 orders of magnitude compared with directly using conventional methods, and outperformed all state-of-the-art algorithms tested in our experiments.
arXiv Detail & Related papers (2020-02-04T20:00:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.