On the Strong Correlation Between Model Invariance and Generalization
- URL: http://arxiv.org/abs/2207.07065v1
- Date: Thu, 14 Jul 2022 17:08:25 GMT
- Title: On the Strong Correlation Between Model Invariance and Generalization
- Authors: Weijian Deng, Stephen Gould, Liang Zheng
- Abstract summary: Generalization captures a model's ability to classify unseen data.
Invariance measures the consistency of model predictions on transformations of the data.
From a dataset-centric view, we find a given model's accuracy and invariance to be linearly correlated across different test sets.
- Score: 54.812786542023325
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generalization and invariance are two essential properties of any machine
learning model. Generalization captures a model's ability to classify unseen
data, while invariance measures the consistency of model predictions on
transformations of the data. Existing research suggests a positive
relationship: a model generalizing well should be invariant to certain visual
factors. Building on this qualitative implication, we make two contributions.
First, we introduce effective invariance (EI), a simple and reasonable measure
of model invariance which does not rely on image labels. Given predictions on a
test image and its transformed version, EI measures how well the predictions
agree and with what level of confidence. Second, using invariance scores
computed by EI, we perform large-scale quantitative correlation studies between
generalization and invariance, focusing on rotation and grayscale
transformations. From a model-centric view, we observe generalization and
invariance of different models exhibit a strong linear relationship, on both
in-distribution and out-of-distribution datasets. From a dataset-centric view,
we find a given model's accuracy and invariance to be linearly correlated across
different test sets. Beyond these major findings, we also discuss other minor
but interesting insights.
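A minimal sketch of how such a label-free invariance score could be computed. This is an illustration consistent with the abstract's description, assuming EI combines the two prediction confidences via their geometric mean when the predicted labels agree and is zero otherwise; the function name and exact formula here are assumptions, and the paper gives the authoritative definition:

```python
import numpy as np

def effective_invariance(probs_orig, probs_trans):
    """Sketch of an effective-invariance (EI) score for one image.

    probs_orig:  softmax probabilities for the original image
    probs_trans: softmax probabilities for the transformed image
                 (e.g. a rotated or grayscale version)

    If the two predicted labels agree, the agreement is scored by the
    geometric mean of the two confidences; otherwise the score is 0.
    No ground-truth label is needed.
    """
    y_orig = int(np.argmax(probs_orig))
    y_trans = int(np.argmax(probs_trans))
    if y_orig != y_trans:
        return 0.0
    return float(np.sqrt(probs_orig[y_orig] * probs_trans[y_trans]))

# Agreeing, confident predictions -> high EI
print(effective_invariance([0.9, 0.05, 0.05], [0.8, 0.1, 0.1]))
# Disagreeing predictions -> EI of 0
print(effective_invariance([0.9, 0.05, 0.05], [0.1, 0.8, 0.1]))
```

Averaging such per-image scores over a test set would yield the model-level invariance values used in the correlation studies.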
Related papers
- Approximation-Generalization Trade-offs under (Approximate) Group Equivariance [3.0458514384586395]
Group equivariant neural networks have demonstrated impressive performance across various domains and applications such as protein and drug design.
We show how models capturing task-specific symmetries lead to improved generalization.
We examine the more general question of model mis-specification when the model symmetries don't align with the data symmetries.
arXiv Detail & Related papers (2023-05-27T22:53:37Z)
- The Lie Derivative for Measuring Learned Equivariance [84.29366874540217]
We study the equivariance properties of hundreds of pretrained models, spanning CNNs, transformers, and Mixer architectures.
We find that many violations of equivariance can be linked to spatial aliasing in ubiquitous network layers, such as pointwise non-linearities.
For example, transformers can be more equivariant than convolutional neural networks after training.
arXiv Detail & Related papers (2022-10-06T15:20:55Z)
- Studying Generalization Through Data Averaging [0.0]
We study train and test performance, as well as the generalization gap given by the mean of their difference over different data set samples.
We predict some aspects about how the generalization gap and model train and test performance vary as a function of SGD noise.
arXiv Detail & Related papers (2022-06-28T00:03:40Z)
- ER: Equivariance Regularizer for Knowledge Graph Completion [107.51609402963072]
We propose a new regularizer, namely, Equivariance Regularizer (ER)
ER can enhance the generalization ability of the model by employing the semantic equivariance between the head and tail entities.
The experimental results indicate a clear and substantial improvement over the state-of-the-art relation prediction methods.
arXiv Detail & Related papers (2022-06-24T08:18:05Z)
- Equivariance Discovery by Learned Parameter-Sharing [153.41877129746223]
We study how to discover interpretable equivariances from data.
Specifically, we formulate this discovery process as an optimization problem over a model's parameter-sharing schemes.
Also, we theoretically analyze the method for Gaussian data and provide a bound on the mean squared gap between the studied discovery scheme and the oracle scheme.
arXiv Detail & Related papers (2022-04-07T17:59:19Z)
- Counterfactual Invariance to Spurious Correlations: Why and How to Pass Stress Tests [87.60900567941428]
A 'spurious correlation' is the dependence of a model on some aspect of the input data that an analyst thinks shouldn't matter.
In machine learning, these have a know-it-when-you-see-it character.
We study stress testing using the tools of causal inference.
arXiv Detail & Related papers (2021-05-31T14:39:38Z)
- Why do classifier accuracies show linear trends under distribution shift? [58.40438263312526]
Accuracies of models on one data distribution are approximately linear functions of the accuracies on another distribution.
We assume the probability that two models agree in their predictions is higher than what we can infer from their accuracy levels alone.
We show that a linear trend must occur when evaluating models on two distributions unless the size of the distribution shift is large.
arXiv Detail & Related papers (2020-12-31T07:24:30Z)
- Memorizing without overfitting: Bias, variance, and interpolation in over-parameterized models [0.0]
The bias-variance trade-off is a central concept in supervised learning.
Modern deep learning methods flout this dogma, achieving state-of-the-art performance.
arXiv Detail & Related papers (2020-10-26T22:31:04Z)
- What causes the test error? Going beyond bias-variance via ANOVA [21.359033212191218]
Modern machine learning methods are often overparametrized, allowing adaptation to the data at a fine level.
Recent work aimed to understand in greater depth why overparametrization is helpful for generalization.
We propose using the analysis of variance (ANOVA) to decompose the variance in the test error in a symmetric way.
arXiv Detail & Related papers (2020-10-11T05:21:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.