Equivariance and Invariance Inductive Bias for Learning from
Insufficient Data
- URL: http://arxiv.org/abs/2207.12258v1
- Date: Mon, 25 Jul 2022 15:26:19 GMT
- Title: Equivariance and Invariance Inductive Bias for Learning from
Insufficient Data
- Authors: Tan Wang, Qianru Sun, Sugiri Pranata, Karlekar Jayashree, Hanwang
Zhang
- Abstract summary: We show why insufficient data renders the model more easily biased to the limited training environments that are usually different from testing.
We propose a class-wise invariant risk minimization (IRM) that efficiently tackles the challenge of missing environmental annotation in conventional IRM.
- Score: 65.42329520528223
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We are interested in learning robust models from insufficient data, without
the need for any externally pre-trained checkpoints. First, compared to
sufficient data, we show why insufficient data renders the model more easily
biased to the limited training environments that are usually different from
testing. For example, if all the training swan samples are "white", the model
may wrongly use the "white" environment to represent the intrinsic class swan.
Then, we justify that equivariance inductive bias can retain the class feature
while invariance inductive bias can remove the environmental feature, leaving
the class feature that generalizes to any environmental changes in testing. To
impose them on learning, for equivariance, we demonstrate that any
off-the-shelf contrastive-based self-supervised feature learning method can be
deployed; for invariance, we propose a class-wise invariant risk minimization
(IRM) that efficiently tackles the challenge of missing environmental
annotation in conventional IRM. State-of-the-art experimental results on
real-world benchmarks (VIPriors, ImageNet100 and NICO) validate the great
potential of equivariance and invariance in data-efficient learning. The code
is available at https://github.com/Wangt-CN/EqInv
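
To make the invariance half concrete, below is a minimal PyTorch sketch of a class-wise IRMv1-style objective. The per-class partitioning, the dummy-scale penalty, and the weight `lam` are illustrative assumptions based on the abstract, not the authors' code; the reference implementation is in the linked EqInv repository. The equivariance half, per the abstract, can be supplied by any off-the-shelf contrastive self-supervised method (e.g., SimCLR or MoCo) unchanged.

```python
import torch
import torch.nn.functional as F

def irmv1_penalty(logits, labels):
    # IRMv1-style penalty (Arjovsky et al., 2019): squared gradient of
    # the risk with respect to a dummy classifier scale fixed at 1.
    scale = torch.ones(1, device=logits.device, requires_grad=True)
    loss = F.cross_entropy(logits * scale, labels)
    grad = torch.autograd.grad(loss, [scale], create_graph=True)[0]
    return (grad ** 2).sum()

def class_wise_irm_loss(logits, labels, lam=1.0):
    # Illustrative class-wise variant: apply the penalty within each
    # class's samples, sidestepping the environment annotations that
    # conventional IRM requires. The paper's exact partitioning may differ.
    erm = F.cross_entropy(logits, labels)
    penalty = logits.new_zeros(())
    for c in labels.unique():
        mask = labels == c
        penalty = penalty + irmv1_penalty(logits[mask], labels[mask])
    return erm + lam * penalty
```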
Related papers
- Probabilistic Contrastive Learning for Long-Tailed Visual Recognition [78.70453964041718]
Long-tailed distributions frequently emerge in real-world data, where a large number of minority categories contain a limited number of samples.
Recent investigations have revealed that supervised contrastive learning exhibits promising potential in alleviating the data imbalance.
We propose a novel probabilistic contrastive (ProCo) learning algorithm that estimates the data distribution of the samples from each class in the feature space; a hedged sketch of the idea appears after this entry.
arXiv Detail & Related papers (2024-03-11T13:44:49Z)
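
The summary above leaves the distributional form unspecified; the NumPy sketch below assumes per-class Gaussians with diagonal covariance purely for illustration (ProCo's actual feature-space model may differ), and shows how such estimates could supply virtual samples for minority classes.

```python
import numpy as np

def fit_class_gaussians(features, labels):
    # Fit a mean and diagonal variance to each class's observed features.
    stats = {}
    for c in np.unique(labels):
        x = features[labels == c]
        stats[c] = (x.mean(axis=0), x.var(axis=0) + 1e-6)
    return stats

def sample_virtual_features(stats, c, n):
    # Draw virtual feature vectors for class c, e.g. to rebalance the
    # pool of contrastive positives for a minority class.
    mean, var = stats[c]
    return np.random.normal(mean, np.sqrt(var), size=(n, mean.shape[0]))
```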
- Effect-Invariant Mechanisms for Policy Generalization [3.701112941066256]
It has been suggested to exploit invariant conditional distributions to learn models that generalize better to unseen environments.
We introduce a relaxation of full invariance called effect-invariance and prove that it is sufficient, under suitable assumptions, for zero-shot policy generalization.
We present empirical results using simulated data and a mobile health intervention dataset to demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-06-19T14:50:24Z)
- Feature-Level Debiased Natural Language Understanding [86.8751772146264]
Existing natural language understanding (NLU) models often rely on dataset biases to achieve high performance on specific datasets.
We propose debiasing contrastive learning (DCT) to mitigate biased latent features while accounting for the dynamic nature of bias.
DCT outperforms state-of-the-art baselines on out-of-distribution datasets while maintaining in-distribution performance.
arXiv Detail & Related papers (2022-12-11T06:16:14Z)
- Malign Overfitting: Interpolation Can Provably Preclude Invariance [30.776243638012314]
We show that "benign overfitting" in which models generalize well despite interpolating might not favorably extend to settings in which robustness or fairness are desirable.
We propose and analyze an algorithm that successfully learns a non-interpolating classifier that is provably invariant.
arXiv Detail & Related papers (2022-11-28T19:17:31Z)
- On the Strong Correlation Between Model Invariance and Generalization [54.812786542023325]
Generalization captures a model's ability to classify unseen data.
Invariance measures consistency of model predictions on transformations of the data.
From a dataset-centric view, we find that a given model's accuracy and invariance are linearly correlated across different test sets; one simple invariance measure is sketched after this entry.
arXiv Detail & Related papers (2022-07-14T17:08:25Z)
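
One simple way to operationalize "consistency of predictions on transformations" is the fraction of samples whose predicted class survives a transformation. In this sketch, `model` and `transform` are placeholder callables, and the paper's exact metric may differ.

```python
import torch

@torch.no_grad()
def invariance_score(model, images, transform):
    # Fraction of samples whose argmax prediction is unchanged
    # under the given transformation.
    model.eval()
    pred = model(images).argmax(dim=1)
    pred_t = model(transform(images)).argmax(dim=1)
    return (pred == pred_t).float().mean().item()
```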
- Agree to Disagree: Diversity through Disagreement for Better Transferability [54.308327969778155]
We propose D-BAT (Diversity-By-disAgreement Training), which enforces agreement among the models on the training data but disagreement on out-of-distribution data; a simplified sketch follows this entry.
We show how D-BAT naturally emerges from the notion of generalized discrepancy.
arXiv Detail & Related papers (2022-02-09T12:03:02Z)
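
A simplified two-model sketch of the agree-on-train, disagree-on-OOD idea. The inner-product disagreement term here is a common stand-in, not D-BAT's published objective; `model_a`, `model_b`, and `x_ood` are assumed inputs.

```python
import torch.nn.functional as F

def dbat_style_loss(model_a, model_b, x_train, y_train, x_ood, alpha=1.0):
    # Both models must fit the labeled training data...
    fit = (F.cross_entropy(model_a(x_train), y_train)
           + F.cross_entropy(model_b(x_train), y_train))
    # ...while being pushed apart on out-of-distribution inputs:
    # penalizing the inner product of their softmax outputs rewards
    # predictions that place mass on different classes.
    p_a = F.softmax(model_a(x_ood), dim=1)
    p_b = F.softmax(model_b(x_ood), dim=1)
    disagreement = (p_a * p_b).sum(dim=1).mean()
    return fit + alpha * disagreement
```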
- Improving Contrastive Learning on Imbalanced Seed Data via Open-World Sampling [96.8742582581744]
We present an open-world unlabeled data sampling framework called Model-Aware K-center (MAK)
MAK follows three simple principles: tailness, proximity, and diversity.
We demonstrate that MAK can consistently improve both the overall representation quality and the class balancedness of the learned features; the greedy k-center component is sketched after this entry.
arXiv Detail & Related papers (2021-11-01T15:09:41Z)
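
The "K-center" in Model-Aware K-center suggests a farthest-point core; the routine below illustrates only the diversity principle (tailness and proximity scoring are omitted), with `pool` and `seed` as assumed feature arrays of shape (N, D) and (M, D).

```python
import numpy as np

def greedy_k_center(pool, seed, k):
    # Greedy farthest-point selection: repeatedly pick the pool point
    # farthest from everything selected so far.
    dist = np.linalg.norm(pool[:, None] - seed[None], axis=2).min(axis=1)
    chosen = []
    for _ in range(k):
        i = int(dist.argmax())
        chosen.append(i)
        dist = np.minimum(dist, np.linalg.norm(pool - pool[i], axis=1))
    return chosen
```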
- Memorizing without overfitting: Bias, variance, and interpolation in over-parameterized models [0.0]
The bias-variance trade-off is a central concept in supervised learning.
Modern deep learning methods flout this dogma, achieving state-of-the-art performance; the classical decomposition at stake is recalled after this entry.
arXiv Detail & Related papers (2020-10-26T22:31:04Z)
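
For reference, the trade-off being "flouted" is the textbook decomposition of expected squared error for data $y = f(x) + \varepsilon$ with noise variance $\sigma^2$ (standard material, not taken from this paper):

```latex
\mathbb{E}\bigl[(y - \hat f_D(x))^2\bigr]
 = \underbrace{\bigl(\mathbb{E}_D[\hat f_D(x)] - f(x)\bigr)^2}_{\text{bias}^2}
 + \underbrace{\mathbb{E}_D\bigl[(\hat f_D(x) - \mathbb{E}_D[\hat f_D(x)])^2\bigr]}_{\text{variance}}
 + \underbrace{\sigma^2}_{\text{noise}}
```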
- What causes the test error? Going beyond bias-variance via ANOVA [21.359033212191218]
Modern machine learning methods are often overparametrized, allowing adaptation to the data at a fine level.
Recent work aimed to understand in greater depth why overparametrization is helpful for generalization.
We propose using the analysis of variance (ANOVA) to decompose the variance in the test error in a symmetric way; the flavor of such a decomposition is recalled after this entry.
arXiv Detail & Related papers (2020-10-11T05:21:13Z)
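
In spirit, a symmetric ANOVA (Hoeffding-style) decomposition splits the variance of the test error $e$ over independent randomness sources, e.g. data sampling $D$ and initialization $I$. The identity below is the standard form, not the paper's exact statement:

```latex
\operatorname{Var}(e) = V_D + V_I + V_{D \times I}, \qquad
V_D = \operatorname{Var}\!\bigl(\mathbb{E}[e \mid D]\bigr), \quad
V_I = \operatorname{Var}\!\bigl(\mathbb{E}[e \mid I]\bigr),
```

where the interaction term $V_{D \times I}$ is defined as the remainder.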
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.