Peeking Behind the Curtains of Residual Learning
- URL: http://arxiv.org/abs/2402.08645v1
- Date: Tue, 13 Feb 2024 18:24:10 GMT
- Title: Peeking Behind the Curtains of Residual Learning
- Authors: Tunhou Zhang, Feng Yan, Hai Li, Yiran Chen
- Abstract summary: "The Plain Neural Net Hypothesis" (PNNH) identifies the internal path across non-linear layers as the most critical part in residual learning.
We thoroughly evaluate PNNH-enabled CNN architectures and Transformers on popular vision benchmarks, showing on-par accuracy, up to 0.3% higher training throughput, and 2x better parameter efficiency compared to ResNets and vision Transformers.
- Score: 10.915277646160707
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The utilization of residual learning has become widespread in deep and
scalable neural nets. However, the fundamental principles that contribute to
the success of residual learning remain elusive, thus hindering effective
training of plain nets with depth scalability. In this paper, we peek behind
the curtains of residual learning by uncovering the "dissipating inputs"
phenomenon that leads to convergence failure in plain neural nets: the input is
gradually compromised through plain layers due to non-linearities, resulting in
challenges of learning feature representations. We theoretically demonstrate
how plain neural nets degenerate the input to random noise and emphasize the
significance of a residual connection that maintains a better lower bound of
surviving neurons as a solution. With our theoretical discoveries, we propose
"The Plain Neural Net Hypothesis" (PNNH) that identifies the internal path
across non-linear layers as the most critical part in residual learning, and
establishes a paradigm to support the training of deep plain neural nets devoid
of residual connections. We thoroughly evaluate PNNH-enabled CNN architectures
and Transformers on popular vision benchmarks, showing on-par accuracy, up to
0.3% higher training throughput, and 2x better parameter efficiency compared to
ResNets and vision Transformers.
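As a rough illustration of the "dissipating inputs" phenomenon described above, the sketch below (not the authors' code) pushes a batch of random inputs through a stack of plain ReLU layers and through the same stack with identity skips, then checks how distinguishable the resulting representations remain. The width, depth, He-style initialization, and the pairwise-cosine proxy are illustrative assumptions, not the paper's formal "surviving neurons" measure.

```python
# Minimal sketch, assuming random untrained layers; the pairwise cosine among
# a batch of inputs is used as a proxy for how much input information survives.
import numpy as np

WIDTH, DEPTH, BATCH = 512, 40, 16

def mean_pairwise_cosine(X: np.ndarray) -> float:
    """Average cosine similarity over all pairs of rows of X."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    gram = Xn @ Xn.T
    iu = np.triu_indices(X.shape[0], k=1)
    return float(gram[iu].mean())

def propagate(X0: np.ndarray, residual: bool, seed: int = 0) -> float:
    """Push a batch through DEPTH random ReLU layers, with or without identity skips."""
    rng = np.random.default_rng(seed)      # same weights for both runs, for a fair comparison
    X = X0.copy()
    for _ in range(DEPTH):
        W = rng.standard_normal((WIDTH, WIDTH)) * np.sqrt(2.0 / WIDTH)  # He-style init
        H = np.maximum(X @ W.T, 0.0)       # plain non-linear (ReLU) layer
        X = X + H if residual else H       # identity skip carries the earlier signal forward
    return mean_pairwise_cosine(X)

X0 = np.random.default_rng(1).standard_normal((BATCH, WIDTH))
print(f"inputs                         : {mean_pairwise_cosine(X0):.3f}")
print(f"plain stack, no skips          : {propagate(X0, residual=False):.3f}")
print(f"same stack with identity skips : {propagate(X0, residual=True):.3f}")
# Typically the plain stack pushes different inputs toward nearly the same
# direction (the representation carries little information about which input
# was fed), while the skip-connected stack collapses noticeably more slowly.
```

The pairwise-cosine proxy is chosen only because it is cheap and scale-invariant; it captures the qualitative claim that plain stacks of non-linear layers lose input-specific information faster than residual stacks, under the stated assumptions.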
Related papers
- Confident magnitude-based neural network pruning [0.0]
Pruning neural networks has proven to be a successful approach to increase the efficiency and reduce the memory storage of deep learning models.
We leverage recent techniques on distribution-free uncertainty quantification to provide finite-sample statistical guarantees to compress deep neural networks.
This work presents experiments in computer vision tasks to illustrate how uncertainty-aware pruning is a useful approach to deploy sparse neural networks safely.
arXiv Detail & Related papers (2024-08-08T21:29:20Z) - Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z) - Addressing caveats of neural persistence with deep graph persistence [54.424983583720675]
We find that the variance of network weights and spatial concentration of large weights are the main factors that impact neural persistence.
We propose an extension of the filtration underlying neural persistence to the whole neural network instead of single layers.
This yields our deep graph persistence measure, which implicitly incorporates persistent paths through the network and alleviates variance-related issues.
arXiv Detail & Related papers (2023-07-20T13:34:11Z) - Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z) - Global quantitative robustness of regression feed-forward neural
networks [0.0]
We adapt the notion of the regression breakdown point to regression neural networks.
We compare the performance, measured by the out-of-sample loss, with a proxy of the breakdown rate.
The results indeed motivate the use of robust loss functions for neural network training.
arXiv Detail & Related papers (2022-11-18T09:57:53Z) - Can pruning improve certified robustness of neural networks? [106.03070538582222]
We show that neural network pruning can improve the empirical robustness of deep neural networks (NNs).
Our experiments show that by appropriately pruning an NN, its certified accuracy can be boosted up to 8.2% under standard training.
We additionally observe the existence of certified lottery tickets that can match both standard and certified robust accuracies of the original dense models.
arXiv Detail & Related papers (2022-06-15T05:48:51Z) - S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration [74.5509794733707]
We present a novel guided learning paradigm that distills binary networks from real-valued networks on the final prediction distribution.
Our proposed method can boost the simple contrastive learning baseline by an absolute gain of 5.515% on BNNs.
Our method achieves substantial improvement over the simple contrastive learning baseline, and is even comparable to many mainstream supervised BNN methods.
arXiv Detail & Related papers (2021-02-17T18:59:28Z) - Over-parametrized neural networks as under-determined linear systems [31.69089186688224]
We show that it is unsurprising that simple neural networks can achieve zero training loss.
We show that kernels typically associated with the ReLU activation function have fundamental flaws.
We propose new activation functions that avoid the pitfalls of ReLU in that they admit zero training loss solutions for any set of distinct data points.
arXiv Detail & Related papers (2020-10-29T21:43:00Z) - Implicit recurrent networks: A novel approach to stationary input processing with recurrent neural networks in deep learning [0.0]
In this work, we introduce and test a novel implementation of recurrent neural networks into deep learning.
We provide an algorithm which implements the backpropagation algorithm on an implicit implementation of recurrent networks.
A single-layer implicit recurrent network is able to solve the XOR problem, while a feed-forward network with a monotonically increasing activation function fails at this task.
arXiv Detail & Related papers (2020-10-20T18:55:32Z) - Feature Purification: How Adversarial Training Performs Robust Deep Learning [66.05472746340142]
We present a principle that we call Feature Purification: one of the causes of the existence of adversarial examples is the accumulation of certain small dense mixtures in the hidden weights during the training of a neural network.
We present experiments on the CIFAR-10 dataset to illustrate this principle, and a theoretical result proving that for certain natural classification tasks, training a two-layer neural network with ReLU activation using randomly initialized gradient descent indeed satisfies this principle.
arXiv Detail & Related papers (2020-05-20T16:56:08Z)