Benign Overfitting and Grokking in ReLU Networks for XOR Cluster Data
- URL: http://arxiv.org/abs/2310.02541v1
- Date: Wed, 4 Oct 2023 02:50:34 GMT
- Title: Benign Overfitting and Grokking in ReLU Networks for XOR Cluster Data
- Authors: Zhiwei Xu, Yutong Wang, Spencer Frei, Gal Vardi, Wei Hu
- Abstract summary: Neural networks trained by gradient descent (GD) have exhibited a number of surprising generalization behaviors.
We show that both of these phenomena provably occur in two-layer ReLU networks trained by GD on XOR cluster data.
At a later training step, the network achieves near-optimal test accuracy while still fitting the random labels in the training data, exhibiting a "grokking" phenomenon.
- Score: 42.870635753205185
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural networks trained by gradient descent (GD) have exhibited a number of
surprising generalization behaviors. First, they can achieve a perfect fit to
noisy training data and still generalize near-optimally, showing that
overfitting can sometimes be benign. Second, they can undergo a period of
classical, harmful overfitting -- achieving a perfect fit to training data with
near-random performance on test data -- before transitioning ("grokking") to
near-optimal generalization later in training. In this work, we show that both
of these phenomena provably occur in two-layer ReLU networks trained by GD on
XOR cluster data where a constant fraction of the training labels are flipped.
In this setting, we show that after the first step of GD, the network achieves
100% training accuracy, perfectly fitting the noisy labels in the training
data, but achieves near-random test accuracy. At a later training step, the
network achieves near-optimal test accuracy while still fitting the random
labels in the training data, exhibiting a "grokking" phenomenon. This provides
the first theoretical result of benign overfitting in neural network
classification when the data distribution is not linearly separable. Our proofs
rely on analyzing the feature learning process under GD, which reveals that the
network implements a non-generalizable linear classifier after one step and
gradually learns generalizable features in later steps.
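To make the setup concrete, here is a minimal NumPy sketch of the experiment the abstract describes (not the authors' code): XOR cluster data are sampled from four Gaussian clusters centered at +/-mu_1 and +/-mu_2 with XOR-style labels, a constant fraction of the training labels is flipped, and a two-layer ReLU network with a fixed second layer is trained by full-batch gradient descent on the logistic loss. The dimension, sample sizes, cluster separation, noise rate, width, and step size below are all illustrative assumptions; whether the one-step memorization and the later grokking transition actually appear depends on how these quantities scale, which the paper makes precise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: the paper's regime has dimension d much larger than
# the sample size n; the exact values here are assumptions, not the paper's.
d, n, n_test = 1000, 200, 2000
noise_rate = 0.1   # assumed "constant fraction" of flipped training labels
sep = 4.0          # assumed cluster separation

def xor_cluster_data(n_samples, flip=0.0):
    """Four Gaussian clusters at +/-sep*e1 and +/-sep*e2 with XOR-style labels."""
    centers = sep * np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])
    labels = np.array([1.0, 1.0, -1.0, -1.0])   # +1 on the e1 axis, -1 on e2
    idx = rng.integers(0, 4, size=n_samples)
    X = rng.standard_normal((n_samples, d))
    X[:, :2] += centers[idx]
    y = labels[idx]
    flip_mask = rng.random(n_samples) < flip
    return X, np.where(flip_mask, -y, y)         # flip a fraction of labels

Xtr, ytr = xor_cluster_data(n, flip=noise_rate)
Xte, yte = xor_cluster_data(n_test)              # clean test labels

# Two-layer ReLU net f(x) = sum_j a_j * relu(w_j . x), second layer fixed.
m = 512
W = rng.standard_normal((m, d)) / np.sqrt(d)
a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)

def forward(X):
    return np.maximum(X @ W.T, 0.0) @ a

def accuracy(X, y):
    return float(np.mean(np.sign(forward(X)) == y))

lr = 0.5   # assumed step size
for step in range(1, 201):
    z = forward(Xtr)
    # logistic loss: dloss/dz = -y * sigmoid(-y z), written stably
    g = -ytr * np.exp(-np.logaddexp(0.0, ytr * z))
    gates = (Xtr @ W.T > 0).astype(float)        # ReLU activation pattern
    W -= lr * ((g[:, None] * gates * a).T @ Xtr) / n   # full-batch GD step
    if step in (1, 10, 50, 100, 200):
        print(f"step {step:3d}: train acc {accuracy(Xtr, ytr):.3f}, "
              f"test acc {accuracy(Xte, yte):.3f}")
```

Logging train and test accuracy at increasing step counts is the diagnostic for the behavior described above: training accuracy saturating (including on flipped labels) while test accuracy stays near chance, followed much later by a rise to near-optimal test accuracy.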
Related papers
- Grokking as the Transition from Lazy to Rich Training Dynamics [35.186196991224286]
Grokking occurs when the train loss of a neural network decreases much earlier than its test loss.
Key determinants of grokking are the rate of feature learning and the alignment of the initial features with the target function.
arXiv Detail & Related papers (2023-10-09T19:33:21Z)
- Benign Overfitting in Two-Layer ReLU Convolutional Neural Networks for XOR Data [24.86314525762012]
We show that a ReLU CNN trained by gradient descent can achieve near Bayes-optimal accuracy.
Our result demonstrates that CNNs have a remarkable capacity to efficiently learn XOR problems, even in the presence of highly correlated features.
arXiv Detail & Related papers (2023-10-03T11:31:37Z)
- Boosted Dynamic Neural Networks [53.559833501288146]
A typical early-exiting dynamic neural network (EDNN) has multiple prediction heads at different layers of the network backbone.
To optimize the model, these prediction heads, together with the network backbone, are trained on every batch of training data, whereas at test time each input exits at only one head.
Treating training and testing inputs differently in the two phases causes a mismatch between the training and testing data distributions.
We formulate an EDNN as an additive model inspired by gradient boosting, and propose multiple training techniques to optimize the model effectively.
arXiv Detail & Related papers (2022-11-30T04:23:12Z)
- Neural networks trained with SGD learn distributions of increasing complexity [78.30235086565388]
We show that neural networks trained using gradient descent initially classify their inputs using lower-order input statistics.
They exploit higher-order statistics only later during training.
We discuss the relation of this distributional simplicity bias (DSB) to other simplicity biases and consider its implications for the principle of universality in learning.
arXiv Detail & Related papers (2022-11-21T15:27:22Z)
- Intersection of Parallels as an Early Stopping Criterion [64.8387564654474]
We propose a method to spot an early stopping point in the training iterations without the need for a validation set.
For a wide range of learning rates, our method, called Cosine-Distance Criterion (CDC), leads to better generalization on average than all the methods that we compare against.
arXiv Detail & Related papers (2022-08-19T19:42:41Z)
- Learning from Data with Noisy Labels Using Temporal Self-Ensemble [11.245833546360386]
Deep neural networks (DNNs) have an enormous capacity to memorize noisy labels.
Current state-of-the-art methods employ a co-training scheme that trains dual networks using samples associated with small losses.
We propose a simple yet effective robust training scheme that operates by training only a single network.
arXiv Detail & Related papers (2022-07-21T08:16:31Z)
- Invertible Neural Networks for Graph Prediction [22.140275054568985]
In this work, we address conditional generation using deep invertible neural networks.
We adopt an end-to-end training approach since our objective is to address prediction and generation in the forward and backward processes at once.
arXiv Detail & Related papers (2022-06-02T17:28:33Z)
- CAFA: Class-Aware Feature Alignment for Test-Time Adaptation [50.26963784271912]
Test-time adaptation (TTA) addresses distribution shift between training and test data by adapting a model to unlabeled data at test time.
We propose a simple yet effective feature alignment loss, termed Class-Aware Feature Alignment (CAFA), which encourages a model to learn target representations in a class-discriminative manner.
arXiv Detail & Related papers (2022-06-01T03:02:07Z)
- Benign Overfitting without Linearity: Neural Network Classifiers Trained by Gradient Descent for Noisy Linear Data [44.431266188350655]
We consider the generalization error of two-layer neural networks trained to interpolation by gradient descent.
We show that neural networks exhibit benign overfitting: they can be driven to zero training error, perfectly fitting any noisy training labels, and simultaneously achieve minimax optimal test error.
In contrast to previous work on benign overfitting that require linear or kernel-based predictors, our analysis holds in a setting where both the model and learning dynamics are fundamentally nonlinear.
arXiv Detail & Related papers (2022-02-11T23:04:00Z)
- Provable Generalization of SGD-trained Neural Networks of Any Width in the Presence of Adversarial Label Noise [85.59576523297568]
We consider a one-hidden-layer leaky ReLU network of arbitrary width trained by stochastic gradient descent.
We prove that SGD produces neural networks that have classification accuracy competitive with that of the best halfspace over the distribution.
arXiv Detail & Related papers (2021-01-04T18:32:49Z)