The smooth output assumption, and why deep networks are better than wide ones
- URL: http://arxiv.org/abs/2211.14347v1
- Date: Fri, 25 Nov 2022 19:05:44 GMT
- Title: The smooth output assumption, and why deep networks are better than wide ones
- Authors: Luis Sa-Couto, Jose Miguel Ramos, Andreas Wichert
- Abstract summary: We propose a new measure that predicts how well a model will generalize.
It is based on the fact that, in reality, boundaries between concepts are generally unsharp.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When several models have similar training scores, classical model selection
heuristics follow Occam's razor and advise choosing the one with the least
capacity. Yet, modern practice with large neural networks has often led to
situations where two networks with exactly the same number of parameters score
similarly on the training set, but the deeper one generalizes better to unseen
examples. With this in mind, it is widely accepted that deep networks are
superior to shallow wide ones, even though, in theory, there is no difference
between the two: both are universal approximators.
In this work we propose a new unsupervised measure that predicts how well a
model will generalize. We call it the output sharpness, and it is based on the
fact that, in reality, boundaries between concepts are generally unsharp. We
test this new measure on several neural network settings and architectures,
and show how strong the correlation generally is between our metric and
test-set performance.
Having established this measure, we give a mathematical, probabilistic
argument predicting that greater network depth lowers our proposed measure.
After verifying this on real data, we can formulate the key argument of the
work: output sharpness hampers generalization; deep networks have an in-built
bias against it; therefore, deep networks beat wide ones.
All in all, the work not only provides a helpful predictor of overfitting that
can be used in practice for model selection (or even regularization), but also
offers much-needed theoretical grounding for the success of modern deep
neural networks.
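The abstract does not spell out how output sharpness is computed, so the following is only a minimal sketch of one plausible unsupervised proxy: the mean input-gradient norm of a network's outputs over a batch of (unlabeled) points. The function name and the gradient-norm definition are illustrative assumptions, not the authors' formula.

```python
# Minimal sketch of an unsupervised "output sharpness" proxy (an assumption:
# the paper does not define the measure here; sharpness is approximated by the
# mean input-gradient norm of the model's outputs over a batch of points).
import torch

def output_sharpness(model, inputs):
    """Mean L2 norm of d(outputs)/d(inputs) over a batch; lower = smoother outputs."""
    inputs = inputs.clone().requires_grad_(True)
    outputs = model(inputs)                        # shape: (batch, num_outputs)
    grads, = torch.autograd.grad(outputs.sum(), inputs)
    return grads.flatten(1).norm(dim=1).mean().item()

# Hypothetical usage for model selection: compare candidates on the same batch
# and prefer the one with the lower (smoother) score.
# score = output_sharpness(model, x_batch)
```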
Related papers
- Relearning Forgotten Knowledge: on Forgetting, Overfit and Training-Free Ensembles of DNNs [9.010643838773477]
We introduce a novel score for quantifying overfit, which monitors the forgetting rate of deep models on validation data.
We show that overfit can occur with and without a decrease in validation accuracy, and may be more common than previously appreciated.
We use our observations to construct a new ensemble method, based solely on the training history of a single network, which provides significant improvement without any additional cost in training time.
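The summary above does not give the exact overfit score, so the sketch below is only one hedged reading of a "forgetting rate": the fraction of validation examples that flip from correct to incorrect between consecutive epochs. The function and its formula are assumptions for illustration.

```python
# Hedged sketch of a "forgetting rate" on validation data (an assumed reading;
# the paper's exact score may differ). An example is "forgotten" at epoch t if
# it was classified correctly at epoch t-1 but incorrectly at epoch t.
import numpy as np

def forgetting_rate(prev_correct, curr_correct):
    """Fraction of validation examples that were correct before and are wrong now."""
    prev_correct = np.asarray(prev_correct, dtype=bool)
    curr_correct = np.asarray(curr_correct, dtype=bool)
    return float((prev_correct & ~curr_correct).mean())

# Per epoch: correct_t = (predictions_t == labels) on the validation set, then
# monitor forgetting_rate(correct_{t-1}, correct_t) as an overfit signal.
```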
arXiv Detail & Related papers (2023-10-17T09:22:22Z)
- Network Degeneracy as an Indicator of Training Performance: Comparing Finite and Infinite Width Angle Predictions [3.04585143845864]
We show that as networks get deeper, they become more susceptible to degeneracy.
We use a simple algorithm that can accurately predict the level of degeneracy for any given fully connected ReLU network architecture.
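The summary does not describe the prediction algorithm itself, so as background the sketch below iterates the standard infinite-width angle recursion for fully connected ReLU layers (the arc-cosine kernel map, assuming He-scaled weights and zero biases) to show the depth-induced collapse of input angles that "degeneracy" refers to.

```python
# Sketch of the infinite-width angle recursion for fully connected ReLU layers
# (arc-cosine kernel map, assuming He-scaled weights and zero biases). This is
# standard background, not the paper's own prediction algorithm.
import numpy as np

def relu_angle_map(theta):
    """Angle between two inputs after one infinite-width ReLU layer."""
    cos_next = (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / np.pi
    return np.arccos(np.clip(cos_next, -1.0, 1.0))

theta = np.pi / 2                     # start from orthogonal inputs
for depth in range(1, 31):
    theta = relu_angle_map(theta)     # the angle shrinks toward 0 with depth,
                                      # i.e. deep networks become degenerate
```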
arXiv Detail & Related papers (2023-06-02T13:02:52Z)
- Feature-Learning Networks Are Consistent Across Widths At Realistic Scales [72.27228085606147]
We study the effect of width on the dynamics of feature-learning neural networks across a variety of architectures and datasets.
Early in training, wide neural networks trained on online data not only have identical loss curves but also agree in their point-wise test predictions throughout training.
We observe, however, that ensembles of narrower networks perform worse than a single wide network.
arXiv Detail & Related papers (2023-05-28T17:09:32Z)
- Wide and Deep Neural Networks Achieve Optimality for Classification [23.738242876364865]
We identify and construct an explicit set of neural network classifiers that achieve optimality.
In particular, we provide explicit activation functions that can be used to construct networks that achieve optimality.
Our results highlight the benefit of using deep networks for classification tasks, in contrast to regression tasks, where excessive depth is harmful.
arXiv Detail & Related papers (2022-04-29T14:27:42Z)
- On the Compression of Natural Language Models [0.0]
We will review state-of-the-art compression techniques such as quantization, knowledge distillation, and pruning.
The goal of this work is to assess whether a sparse, trainable subnetwork exists for natural language models (NLMs).
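As a concrete illustration of one technique this survey reviews, here is a minimal sketch of symmetric uniform post-training quantization of a weight tensor; it is a generic example, not a method proposed in the paper, and the 8-bit default is arbitrary.

```python
# Generic sketch of symmetric uniform post-training quantization for a weight
# tensor -- one of the compression techniques the survey reviews, not a method
# proposed by this paper.
import numpy as np

def quantize_dequantize(w, num_bits=8):
    """Round weights to a signed num_bits grid and map them back to floats."""
    qmax = 2 ** (num_bits - 1) - 1
    max_abs = np.abs(w).max()
    scale = max_abs / qmax if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return q * scale
```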
arXiv Detail & Related papers (2021-12-13T08:14:21Z)
- What can linearized neural networks actually say about generalization? [67.83999394554621]
In certain infinitely-wide neural networks, the neural tangent kernel (NTK) theory fully characterizes generalization.
We show that the linear approximations can indeed rank the learning complexity of certain tasks for neural networks.
Our work provides concrete examples of novel deep learning phenomena which can inspire future theoretical research.
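For readers unfamiliar with the term, a "linearized network" is the first-order Taylor expansion of a network in its parameters, the object NTK theory studies. The sketch below builds one with torch.func; the tiny MLP is an arbitrary stand-in, not the paper's experimental setup.

```python
# Sketch of a linearized network: first-order Taylor expansion of f(x; params)
# around initial parameters params0. The tiny MLP is an arbitrary stand-in.
import torch
from torch.func import functional_call, jvp

net = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))
params0 = {k: v.detach().clone() for k, v in net.named_parameters()}

def f(params, x):
    return functional_call(net, params, (x,))

def f_linearized(params, x):
    """f(x; params0) + J_params f(x; params0) applied to (params - params0)."""
    deltas = {k: params[k] - params0[k] for k in params0}
    y0, jvp_out = jvp(lambda p: f(p, x), (params0,), (deltas,))
    return y0 + jvp_out

# Sanity check: at the expansion point the linearization matches the network.
# x = torch.randn(5, 2)
# assert torch.allclose(f_linearized(params0, x), f(params0, x))
```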
arXiv Detail & Related papers (2021-06-12T13:05:11Z)
- Redundant representations help generalization in wide neural networks [71.38860635025907]
We study the last hidden layer representations of various state-of-the-art convolutional neural networks.
We find that if the last hidden representation is wide enough, its neurons tend to split into groups that carry identical information, and differ from each other only by statistically independent noise.
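The claim that wide last-layer neurons split into groups carrying identical information suggests a simple diagnostic: cluster neurons whose activations are near-perfectly correlated across a batch. The greedy grouping below is a naive sketch under that assumption, not the authors' analysis pipeline.

```python
# Naive sketch: group last-hidden-layer neurons whose activations are almost
# perfectly correlated across a batch of examples (a rough proxy for "identical
# information up to independent noise"); not the authors' analysis pipeline.
import numpy as np

def redundant_groups(acts, threshold=0.95):
    """acts: (num_examples, num_neurons) activations; returns lists of neuron indices."""
    corr = np.corrcoef(acts, rowvar=False)
    groups, assigned = [], set()
    for i in range(corr.shape[0]):
        if i in assigned:
            continue
        group = [j for j in range(corr.shape[0])
                 if j not in assigned and corr[i, j] >= threshold]
        assigned.update(group)
        groups.append(group)
    return groups
```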
arXiv Detail & Related papers (2021-06-07T10:18:54Z)
- Towards an Understanding of Benign Overfitting in Neural Networks [104.2956323934544]
Modern machine learning models often employ a huge number of parameters and are typically optimized to have zero training loss.
We examine how these benign overfitting phenomena occur in a two-layer neural network setting.
We show that it is possible for the two-layer ReLU network interpolator to achieve a near minimax-optimal learning rate.
arXiv Detail & Related papers (2021-06-06T19:08:53Z)
- The Low-Rank Simplicity Bias in Deep Networks [46.79964271742486]
We make a series of empirical observations that investigate and extend the hypothesis that deep networks are inductively biased to find solutions with lower effective rank embeddings.
We show that this claim holds for finite-width linear and non-linear models under practical learning paradigms, and that on natural data these are often the solutions that generalize well.
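One standard way to quantify "lower effective rank embeddings" is the entropy-based effective rank of Roy and Vetterli, sketched below; the paper may use a different rank measure, so treat this as an illustrative default.

```python
# Sketch: entropy-based effective rank of an embedding matrix (one standard
# definition of "effective rank"; the paper may measure rank differently).
import numpy as np

def effective_rank(embeddings):
    """exp(entropy of normalized singular values) of a (num_points, dim) matrix."""
    s = np.linalg.svd(embeddings, compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]
    return float(np.exp(-(p * np.log(p)).sum()))

# A value well below min(num_points, dim) indicates the embeddings concentrate
# in a low-dimensional subspace.
```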
arXiv Detail & Related papers (2021-03-18T17:58:02Z)
- Learning from Failure: Training Debiased Classifier from Biased Classifier [76.52804102765931]
We show that neural networks learn to rely on spurious correlation only when it is "easier" to learn than the desired knowledge.
We propose a failure-based debiasing scheme by training a pair of neural networks simultaneously.
Our method significantly improves the training of the network against various types of biases in both synthetic and real-world datasets.
arXiv Detail & Related papers (2020-07-06T07:20:29Z)
- ESPN: Extremely Sparse Pruned Networks [50.436905934791035]
We show that a simple iterative mask discovery method can achieve state-of-the-art compression of very deep networks.
Our algorithm represents a hybrid approach between single shot network pruning methods and Lottery-Ticket type approaches.
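As generic background on mask-based pruning (the paper's ESPN algorithm is a hybrid scheme not reproduced here), the sketch below shows a single magnitude-pruning step that keeps the largest-magnitude weights at a target sparsity.

```python
# Generic sketch of one magnitude-pruning step (keep the largest-magnitude
# weights at a target sparsity). Standard background; the paper's ESPN
# mask-discovery algorithm is a different, hybrid scheme.
import torch

def magnitude_mask(weights, sparsity):
    """Binary mask keeping roughly the top (1 - sparsity) fraction of |weights|."""
    flat = weights.abs().flatten()
    k = int(round((1.0 - sparsity) * flat.numel()))
    if k == 0:
        return torch.zeros_like(weights)
    threshold = flat.topk(k).values.min()
    return (weights.abs() >= threshold).to(weights.dtype)

# Typical iterative loop: train -> build masks at a higher sparsity ->
# apply masks (w *= mask) -> fine-tune -> repeat until the target sparsity.
```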
arXiv Detail & Related papers (2020-06-28T23:09:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.