Learning Robust and Lightweight Model through Separable Structured
Transformations
- URL: http://arxiv.org/abs/2112.13551v2
- Date: Wed, 29 Dec 2021 02:25:38 GMT
- Title: Learning Robust and Lightweight Model through Separable Structured
Transformations
- Authors: Xian Wei, Yanhui Huang, Yangyu Xu, Mingsong Chen, Hai Lan, Yuanxiang
Li, Zhongfeng Wang and Xuan Tang
- Abstract summary: We propose a separable structural transformation of the fully-connected layer to reduce the parameters of convolutional neural networks.
We successfully reduce the amount of network parameters by 90%, while the robust accuracy loss is less than 1.5%.
We evaluate the proposed approach on MLP, VGG-16 and Vision Transformer, using datasets such as ImageNet, SVHN, CIFAR-100 and CIFAR-10.
- Score: 13.208781763887947
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the proliferation of mobile devices and the Internet of Things, deep
learning models are increasingly deployed on devices with limited computing
resources and memory, and are exposed to the threat of adversarial noise.
Learning deep models that are both lightweight and robust is necessary for
such devices. However, it is difficult for current deep learning solutions to
learn a model that possesses both properties without degrading one or the
other. As is well known, the fully-connected layers contribute most of the
parameters of convolutional neural networks. We perform a separable structural
transformation of the fully-connected layer to reduce the parameters, where the
large-scale weight matrix of the fully-connected layer is decoupled by the
tensor product of several separable small-sized matrices. Note that data, such
as images, no longer need to be flattened before being fed to the
fully-connected layer, retaining the valuable spatial geometric information of
the data. Moreover, to further enhance both compactness and robustness, we
propose a joint constraint of sparsity and a differentiable
condition number, which is imposed on these separable matrices. We evaluate the
proposed approach on MLP, VGG-16 and Vision Transformer. The experimental
results on datasets such as ImageNet, SVHN, CIFAR-100 and CIFAR10 show that we
successfully reduce the amount of network parameters by 90%, while the robust
accuracy loss is less than 1.5%, which is better than the SOTA methods based on
the original fully-connected layer. Notably, the approach retains a clear
advantage even at very high compression rates, e.g., 200 times.
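The separable transformation and joint constraint described in the abstract can be sketched numerically. The factor shapes, penalty weight, and condition-number surrogate below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

# Hedged sketch: replace the large weight matrix W of a fully-connected
# layer with the Kronecker (tensor) product of two small factors A and B.
# The input stays a 2-D array (never flattened), and the parameter count
# drops from m1*m2*n1*n2 to m1*n1 + m2*n2.
rng = np.random.default_rng(0)
m1, n1, m2, n2 = 8, 8, 16, 12
A = rng.standard_normal((m1, n1))
B = rng.standard_normal((m2, n2))
x = rng.standard_normal((n1, n2))      # image-like 2-D input, not flattened

# Separable transform: A @ x @ B.T equals reshaping (A kron B) @ vec(x),
# without ever materializing the big matrix.
y_sep = A @ x @ B.T

W = np.kron(A, B)                      # the full (m1*m2) x (n1*n2) matrix
y_full = (W @ x.reshape(-1)).reshape(m1, m2)
assert np.allclose(y_sep, y_full)

full_params = W.size                   # 12288 parameters
sep_params = A.size + B.size           # 256 parameters

def cond_surrogate(M, eps=1e-8):
    """Illustrative differentiable condition-number surrogate: the ratio of
    the largest to smallest singular value of a factor (an assumption; the
    paper's exact constraint may differ)."""
    s = np.linalg.svd(M, compute_uv=False)
    return s[0] / (s[-1] + eps)

# A joint penalty in the spirit of the abstract: sparsity (L1) plus
# conditioning of each separable factor; the 1e-3 weight is arbitrary.
penalty = cond_surrogate(A) + cond_surrogate(B) \
    + 1e-3 * (np.abs(A).sum() + np.abs(B).sum())
```

Because the layer acts on the 2-D input from both sides, the spatial layout of image-like data is preserved, which is what the abstract means by not flattening inputs before the fully-connected layer.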
Related papers
- LiteNeXt: A Novel Lightweight ConvMixer-based Model with Self-embedding Representation Parallel for Medical Image Segmentation [2.0901574458380403]
We propose a new lightweight but efficient model, namely LiteNeXt, for medical image segmentation.
LiteNeXt is trained from scratch with a small number of parameters (0.71M) and a low computational cost (0.42 GFLOPs).
arXiv Detail & Related papers (2024-04-04T01:59:19Z) - Efficient Compression of Overparameterized Deep Models through
Low-Dimensional Learning Dynamics [10.673414267895355]
We present a novel approach for compressing overparameterized models.
Our algorithm improves the training efficiency by more than 2x, without compromising generalization.
arXiv Detail & Related papers (2023-11-08T23:57:03Z) - Layer-wise Linear Mode Connectivity [52.6945036534469]
Averaging neural network parameters is an intuitive method for fusing the knowledge of two independent models.
It is most prominently used in federated learning.
We analyse the performance of models that result from averaging single layers, or groups of layers.
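The layer-wise averaging idea can be sketched as follows; the dict-of-arrays model representation and the function name are assumptions for illustration:

```python
import numpy as np

# Hypothetical sketch of layer-wise parameter averaging, as used in
# federated-learning-style model fusion. Models are represented here as
# dicts mapping layer names to weight arrays.
def average_models(model_a, model_b, layers=None):
    """Average parameters layer by layer; `layers` restricts which layers
    are averaged (the rest are taken from model_a unchanged)."""
    layers = set(layers) if layers is not None else set(model_a)
    return {
        name: (w + model_b[name]) / 2 if name in layers else w
        for name, w in model_a.items()
    }

a = {"fc1": np.ones((2, 2)), "fc2": np.zeros(3)}
b = {"fc1": np.full((2, 2), 3.0), "fc2": np.ones(3)}
avg = average_models(a, b, layers=["fc1"])  # average only fc1
```

Restricting the averaged set to single layers or groups of layers is what lets one probe, per layer, where linear mode connectivity holds.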
arXiv Detail & Related papers (2023-07-13T09:39:10Z) - Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution [91.3781512926942]
Image super-resolution (SR) has witnessed extensive neural network designs from CNN to transformer architectures.
This work investigates the potential of network pruning for super-resolution, to take advantage of off-the-shelf network designs and reduce the underlying computational overhead.
We propose a novel Iterative Soft Shrinkage-Percentage (ISS-P) method that optimizes the sparse structure of a randomly initialized network at each iteration and shrinks unimportant weights on-the-fly by a small amount proportional to their magnitude.
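A minimal sketch of the soft-shrinkage idea, assuming magnitude-based selection and a fixed shrink factor (both illustrative, not the paper's exact ISS-P schedule):

```python
import numpy as np

# Hedged sketch: instead of hard-zeroing pruned weights, shrink the
# bottom `percent` of weights (by magnitude) by a small factor each
# iteration, so "pruned" weights retain a chance to recover later.
def soft_shrink(w, percent=0.3, factor=0.9):
    flat = np.abs(w).ravel()
    k = int(len(flat) * percent)
    if k == 0:
        return w.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    out = w.copy()
    mask = np.abs(w) <= threshold
    out[mask] *= factor  # shrink is proportional to current magnitude
    return out
```

Applied repeatedly during training, the small weights decay geometrically toward zero rather than being removed in one hard step.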
arXiv Detail & Related papers (2023-03-16T21:06:13Z) - Unifying Synergies between Self-supervised Learning and Dynamic
Computation [53.66628188936682]
We present a novel perspective on the interplay between SSL and DC paradigms.
We show that it is feasible to simultaneously learn a dense and gated sub-network from scratch in an SSL setting.
The co-evolution during pre-training of both dense and gated encoder offers a good accuracy-efficiency trade-off.
arXiv Detail & Related papers (2023-01-22T17:12:58Z) - BiViT: Extremely Compressed Binary Vision Transformer [19.985314022860432]
We propose to solve two fundamental challenges to push the horizon of Binary Vision Transformers (BiViT).
We propose Softmax-aware Binarization, which dynamically adapts to the data distribution and reduces the error caused by binarization.
Our method outperforms state-of-the-art approaches by 19.8% on the TinyImageNet dataset.
arXiv Detail & Related papers (2022-11-14T03:36:38Z) - Compact representations of convolutional neural networks via weight
pruning and quantization [63.417651529192014]
We propose a novel storage format for convolutional neural networks (CNNs) based on source coding and leveraging both weight pruning and quantization.
We achieve a reduction of space occupancy up to 0.6% on fully connected layers and 5.44% on the whole network, while performing at least as competitive as the baseline.
arXiv Detail & Related papers (2021-08-28T20:39:54Z) - Basis Scaling and Double Pruning for Efficient Inference in
Network-Based Transfer Learning [1.3467579878240454]
We decompose a convolutional layer into two layers: a convolutional layer with the orthonormal basis vectors as the filters, and a "BasisScalingConv" layer which is responsible for rescaling the features.
We can achieve pruning ratios up to 74.6% for CIFAR-10 and 98.9% for MNIST in model parameters.
arXiv Detail & Related papers (2021-08-06T00:04:02Z) - Compact CNN Structure Learning by Knowledge Distillation [34.36242082055978]
We propose a framework that leverages knowledge distillation along with customizable block-wise optimization to learn a lightweight CNN structure.
Our method achieves state-of-the-art network compression while attaining better inference accuracy.
In particular, for the already compact network MobileNet_v2, our method offers up to 2x and 5.2x better model compression.
arXiv Detail & Related papers (2021-04-19T10:34:22Z) - Understanding Self-supervised Learning with Dual Deep Networks [74.92916579635336]
We propose a novel framework to understand contrastive self-supervised learning (SSL) methods that employ dual pairs of deep ReLU networks.
We prove that in each SGD update of SimCLR with various loss functions, the weights at each layer are updated by a covariance operator.
To further study what role the covariance operator plays and which features are learned in such a process, we model the data generation and augmentation processes through a hierarchical latent tree model (HLTM).
arXiv Detail & Related papers (2020-10-01T17:51:49Z) - When Residual Learning Meets Dense Aggregation: Rethinking the
Aggregation of Deep Neural Networks [57.0502745301132]
We propose Micro-Dense Nets, a novel architecture with global residual learning and local micro-dense aggregations.
Our micro-dense block can be integrated with neural architecture search based models to boost their performance.
arXiv Detail & Related papers (2020-04-19T08:34:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.