Towards Practical Control of Singular Values of Convolutional Layers
- URL: http://arxiv.org/abs/2211.13771v1
- Date: Thu, 24 Nov 2022 19:09:44 GMT
- Title: Towards Practical Control of Singular Values of Convolutional Layers
- Authors: Alexandra Senderovich, Ekaterina Bulatova, Anton Obukhov, Maxim
Rakhuba
- Abstract summary: Convolutional neural networks (CNNs) are easy to train, but their essential properties, such as generalization error and adversarial robustness, are hard to control.
Recent research demonstrated that singular values of convolutional layers significantly affect such elusive properties.
We offer a principled approach to alleviating constraints of the prior art at the expense of an insignificant reduction in layer expressivity.
- Score: 65.25070864775793
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In general, convolutional neural networks (CNNs) are easy to train, but their
essential properties, such as generalization error and adversarial robustness,
are hard to control. Recent research demonstrated that singular values of
convolutional layers significantly affect such elusive properties and offered
several methods for controlling them. Nevertheless, these methods present an
intractable computational challenge or resort to coarse approximations. In this
paper, we offer a principled approach to alleviating constraints of the prior
art at the expense of an insignificant reduction in layer expressivity. Our
method is based on the tensor-train decomposition; it retains control over the
actual singular values of convolutional mappings while providing structurally
sparse and hardware-friendly representation. We demonstrate the improved
properties of modern CNNs with our method and analyze its impact on the model
performance, calibration, and adversarial robustness. The source code is
available at: https://github.com/WhiteTeaDragon/practical_svd_conv
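The singular values at issue are those of the full linear map induced by a convolutional layer, not of the flattened kernel. As a hedged illustration (not the paper's code, which uses a tensor-train parametrization), for circular padding these singular values can be computed exactly from the FFT of the kernel; the function name and interface below are assumptions for the sketch:

```python
import numpy as np

def conv_singular_values(kernel, n):
    """All singular values of a circular 2D convolution with the given kernel.

    kernel: array of shape (c_out, c_in, h, w); n: spatial size of the input.
    """
    c_out, c_in, h, w = kernel.shape
    # Zero-pad the kernel spatially to n x n and take a 2D FFT per channel pair.
    padded = np.zeros((c_out, c_in, n, n), dtype=complex)
    padded[:, :, :h, :w] = kernel
    transformed = np.fft.fft2(padded, axes=(2, 3))
    # At each of the n*n frequencies the operator acts as a c_out x c_in
    # matrix; pooling the singular values of these blocks gives the spectrum.
    blocks = transformed.transpose(2, 3, 0, 1)      # (n, n, c_out, c_in)
    svs = np.linalg.svd(blocks, compute_uv=False)   # (n, n, min(c_out, c_in))
    return np.sort(svs.ravel())[::-1]
```

Clipping or rescaling the per-frequency singular values before inverting the FFT is one way such control is typically exercised; the paper's contribution is making this kind of control cheap and exact for its structured layers.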
Related papers
- MOREL: Enhancing Adversarial Robustness through Multi-Objective Representation Learning [1.534667887016089]
Deep neural networks (DNNs) are vulnerable to slight adversarial perturbations.
We show that strong feature representation learning during training can significantly enhance the original model's robustness.
We propose MOREL, a multi-objective feature representation learning approach, encouraging classification models to produce similar features for inputs within the same class, despite perturbations.
arXiv Detail & Related papers (2024-10-02T16:05:03Z)
- SINDER: Repairing the Singular Defects of DINOv2 [61.98878352956125]
Vision Transformer models trained on large-scale datasets often exhibit artifacts in the patch tokens they extract.
We propose a novel fine-tuning smooth regularization that rectifies structural deficiencies using only a small dataset.
arXiv Detail & Related papers (2024-07-23T20:34:23Z)
- Distance Weighted Trans Network for Image Completion [52.318730994423106]
We propose a new architecture that relies on Distance-based Weighted Transformer (DWT) to better understand the relationships between an image's components.
CNNs are used to augment the local texture information of coarse priors.
DWT blocks are used to recover certain coarse textures and coherent visual structures.
arXiv Detail & Related papers (2023-10-11T12:46:11Z)
- Efficient Bound of Lipschitz Constant for Convolutional Layers by Gram Iteration [122.51142131506639]
We introduce a precise, fast, and differentiable upper bound for the spectral norm of convolutional layers using circulant matrix theory.
We show through a comprehensive set of experiments that our approach outperforms other state-of-the-art methods in terms of precision, computational cost, and scalability.
It proves highly effective for the Lipschitz regularization of convolutional neural networks, with competitive results against concurrent approaches.
arXiv Detail & Related papers (2023-05-25T15:32:21Z)
- On the effectiveness of partial variance reduction in federated learning with heterogeneous data [27.527995694042506]
We show that the diversity of the final classification layers across clients impedes the performance of the FedAvg algorithm.
Motivated by this, we propose to correct the model by applying variance reduction only to the final layers.
We demonstrate that this significantly outperforms existing benchmarks at a similar or lower communication cost.
arXiv Detail & Related papers (2022-12-05T11:56:35Z)
- Revisiting Sparse Convolutional Model for Visual Recognition [40.726494290922204]
This paper revisits the sparse convolutional modeling for image classification.
We show that such models have equally strong empirical performance on CIFAR-10, CIFAR-100, and ImageNet datasets.
arXiv Detail & Related papers (2022-10-24T04:29:21Z)
- Counterbalancing Teacher: Regularizing Batch Normalized Models for Robustness [15.395021925719817]
Batch normalization (BN) is a technique for training deep neural networks that accelerates their convergence to reach higher accuracy.
We show that BN incentivizes the model to rely on low-variance features that are highly specific to the training (in-domain) data.
We propose Counterbalancing Teacher (CT) to enforce the student network's learning of robust representations.
arXiv Detail & Related papers (2022-07-04T16:16:24Z)
- Distributed Adversarial Training to Robustify Deep Neural Networks at Scale [100.19539096465101]
Current deep neural networks (DNNs) are vulnerable to adversarial attacks, where adversarial perturbations to the inputs can change or manipulate classification.
To defend against such attacks, adversarial training (AT) has proven effective at improving model robustness.
We propose a large-batch adversarial training framework implemented over multiple machines.
arXiv Detail & Related papers (2022-06-13T15:39:43Z)
- Deep learning: a statistical viewpoint [120.94133818355645]
Deep learning has revealed some major surprises from a theoretical perspective.
In particular, simple gradient methods easily find near-optimal solutions to non-convex training problems.
We conjecture that specific principles underlie these phenomena.
arXiv Detail & Related papers (2021-03-16T16:26:36Z)
- Convolutional Normalization: Improving Deep Convolutional Network Robustness and Training [44.66478612082257]
Normalization techniques have become a basic component in modern convolutional neural networks (ConvNets).
We introduce a simple and efficient "convolutional normalization" method that can fully exploit the convolutional structure in the Fourier domain.
We show that convolutional normalization can reduce the layerwise spectral norm of the weight matrices and hence improve the Lipschitzness of the network.
arXiv Detail & Related papers (2021-03-01T00:33:04Z)
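The spectral-norm bound from "Efficient Bound of Lipschitz Constant for Convolutional Layers by Gram Iteration" above rests on a simple idea: repeated squaring G ← GᵀG raises every singular value to the power 2ᵗ, so the largest one comes to dominate the Frobenius norm, and the 2ᵗ-th root of that norm is a differentiable upper bound converging to the spectral norm. A minimal dense-matrix sketch (the paper applies this to convolutions via circulant matrix theory; this simplified version and its function name are assumptions):

```python
import numpy as np

def spectral_norm_upper_bound(weight, n_iter=8):
    """Upper bound on the largest singular value via Gram iteration.

    After t squarings the singular values are sigma_i ** 2**t, so
    ||G_t||_F ** (1 / 2**t) >= sigma_1, with equality in the limit.
    Each step is rescaled, with the scale tracked in log space, to
    avoid floating-point overflow from the repeated squaring.
    """
    g = np.asarray(weight, dtype=np.float64)
    log_scale = 0.0
    for _ in range(n_iter):
        norm = np.linalg.norm(g)
        g = g / norm
        log_scale = 2.0 * (log_scale + np.log(norm))
        g = g.T @ g
    # Frobenius norm of G_t with the scaling undone, then the 2**t-th root.
    return np.exp((np.log(np.linalg.norm(g)) + log_scale) / 2.0 ** n_iter)
```

Because every step is differentiable, the bound can be used directly as a Lipschitz regularization term during training.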
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.