On the Generalization of Models Trained with SGD: Information-Theoretic
Bounds and Implications
- URL: http://arxiv.org/abs/2110.03128v1
- Date: Thu, 7 Oct 2021 00:53:33 GMT
- Title: On the Generalization of Models Trained with SGD: Information-Theoretic
Bounds and Implications
- Authors: Ziqiao Wang, Yongyi Mao
- Abstract summary: We present new and tighter information-theoretic upper bounds for the generalization error of machine learning models, such as neural networks, trained with SGD.
An experimental study based on these bounds provides some insights into the SGD training of neural networks.
- Score: 13.823089111538128
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper follows up on a recent work of Neu (2021) and presents new and
tighter information-theoretic upper bounds for the generalization error of
machine learning models, such as neural networks, trained with SGD. We apply
these bounds to analyzing the generalization behaviour of linear and two-layer
ReLU networks. An experimental study based on these bounds provides some insights
into the SGD training of neural networks. The results also point to a new and simple
regularization scheme which we show performs comparably to the current state of
the art.
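For background, and only as a sketch rather than the paper's own result: the information-theoretic approach followed here via (Neu, 2021) traces back to the mutual-information generalization bound of Xu and Raginsky (2017), which for a sub-Gaussian loss can be stated as follows.

% Background sketch only: the classical mutual-information generalization bound
% (Xu & Raginsky, 2017). The paper derives tighter, SGD-specific variants that
% are not reproduced here.
\[
  \bigl|\,\mathbb{E}\!\left[L_\mu(W) - L_S(W)\right]\bigr|
    \;\le\; \sqrt{\frac{2\sigma^2}{n}\, I(W;S)}
\]
% S = (Z_1,\dots,Z_n): training sample of n points drawn i.i.d. from \mu;
% W: weights returned by the (possibly randomized) learning algorithm, e.g. SGD;
% L_S(W): empirical risk on S;  L_\mu(W): population risk;
% the loss \ell(w,Z) is assumed \sigma-sub-Gaussian under Z \sim \mu;
% I(W;S): mutual information between the learned weights and the training sample.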
Related papers
- A Survey on Statistical Theory of Deep Learning: Approximation, Training Dynamics, and Generative Models [13.283281356356161]
We review the literature on statistical theories of neural networks from three perspectives.
Results on excess risks for neural networks are reviewed.
Papers that attempt to answer "how the neural network finds the solution that can generalize well on unseen data" are reviewed.
arXiv Detail & Related papers (2024-01-14T02:30:19Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Generalization and Estimation Error Bounds for Model-based Neural Networks [78.88759757988761]
We show that the generalization abilities of model-based networks for sparse recovery outperform those of regular ReLU networks.
We derive practical design rules that allow constructing model-based networks with guaranteed high generalization.
arXiv Detail & Related papers (2023-04-19T16:39:44Z)
- Gradient Descent in Neural Networks as Sequential Learning in RKBS [63.011641517977644]
We construct an exact power-series representation of the neural network in a finite neighborhood of the initial weights.
We prove that, regardless of width, the training sequence produced by gradient descent can be exactly replicated by regularized sequential learning.
arXiv Detail & Related papers (2023-02-01T03:18:07Z)
- Learning Dynamics and Generalization in Reinforcement Learning [59.530058000689884]
We show theoretically that temporal difference learning encourages agents to fit non-smooth components of the value function early in training.
We show that neural networks trained using temporal difference algorithms on dense reward tasks exhibit weaker generalization between states than randomly initialized networks and networks trained with policy gradient methods.
arXiv Detail & Related papers (2022-06-05T08:49:16Z)
- On the Interpretability of Regularisation for Neural Networks Through Model Gradient Similarity [0.0]
Model Gradient Similarity (MGS) serves as a metric of regularisation.
MGS provides the basis for a new regularisation scheme which exhibits excellent performance.
arXiv Detail & Related papers (2022-05-25T10:38:33Z)
- Differentiable Reasoning over Long Stories -- Assessing Systematic Generalisation in Neural Models [12.479512369785082]
We consider two classes of neural models: "E-GNN", the graph-based models that can process graph-structured data and consider the edge attributes simultaneously; and "L-Graph", the sequence-based models which can process a linearized version of the graphs.
We found that the modified recurrent neural network yields surprisingly accurate results across all systematic generalisation tasks, outperforming the graph neural network.
arXiv Detail & Related papers (2022-03-20T18:34:42Z)
- With Greater Distance Comes Worse Performance: On the Perspective of Layer Utilization and Model Generalization [3.6321778403619285]
Generalization of deep neural networks remains one of the main open problems in machine learning.
Early layers generally learn representations relevant to performance on both training data and testing data.
Deeper layers only minimize training risks and fail to generalize well with testing or mislabeled data.
arXiv Detail & Related papers (2022-01-28T05:26:32Z)
- On Connections between Regularizations for Improving DNN Robustness [67.28077776415724]
This paper analyzes regularization terms proposed recently for improving the adversarial robustness of deep neural networks (DNNs).
We study possible connections between several effective methods, including input-gradient regularization, Jacobian regularization, curvature regularization, and a cross-Lipschitz functional.
arXiv Detail & Related papers (2020-07-04T23:43:32Z)
- Rethinking Generalization of Neural Models: A Named Entity Recognition Case Study [81.11161697133095]
We take the NER task as a testbed to analyze the generalization behavior of existing models from different perspectives.
Experiments with in-depth analyses diagnose the bottleneck of existing neural NER models.
As a by-product of this paper, we have open-sourced a project that involves a comprehensive summary of recent NER papers.
arXiv Detail & Related papers (2020-01-12T04:33:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.