Domain Generalization In Robust Invariant Representation
- URL: http://arxiv.org/abs/2304.03431v2
- Date: Sun, 25 Feb 2024 03:23:31 GMT
- Title: Domain Generalization In Robust Invariant Representation
- Authors: Gauri Gupta, Ritvik Kapila, Keshav Gupta, Ramesh Raskar
- Abstract summary: In this paper, we investigate the generalization of invariant representations on out-of-distribution data.
We show that the invariant model learns unstructured latent representations that are robust to distribution shifts.
- Score: 10.132611239890345
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unsupervised approaches for learning representations invariant to common
transformations are used quite often for object recognition. Learning
invariances makes models more robust and practical to use in real-world
scenarios. Since data transformations that do not change the intrinsic
properties of the object cause the majority of the complexity in recognition
tasks, models that are invariant to these transformations help reduce the
amount of training data required. This further increases the model's efficiency
and simplifies training. In this paper, we investigate the generalization of
invariant representations on out-of-distribution data and try to answer the
question: Do model representations invariant to some transformations in a
particular seen domain also remain invariant in previously unseen domains?
Through extensive experiments, we demonstrate that the invariant model learns
unstructured latent representations that are robust to distribution shifts,
thus making invariance a desirable property for training in
resource-constrained settings.
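To make the evaluated property concrete, below is a minimal sketch of how one might measure an encoder's invariance to a transformation on both a seen and a previously unseen domain. This is not the paper's actual model, data, or protocol: the toy encoder, the horizontal-flip transformation, and the random tensors standing in for the two domains are all illustrative assumptions.

# Hedged sketch: quantify how invariant a learned encoder's representations
# remain under a transformation, on in-domain and out-of-domain batches.
# `encoder`, `flip`, and the synthetic "domains" are placeholders.
import torch
import torch.nn as nn

def invariance_gap(encoder: nn.Module, x: torch.Tensor, transform) -> torch.Tensor:
    """Mean cosine distance between representations of x and transform(x).

    A value near 0 indicates the encoder is (empirically) invariant to the
    transformation on this batch.
    """
    encoder.eval()
    with torch.no_grad():
        z = encoder(x)               # representations of the original inputs
        z_t = encoder(transform(x))  # representations of the transformed inputs
    cos = nn.functional.cosine_similarity(z.flatten(1), z_t.flatten(1), dim=1)
    return (1.0 - cos).mean()

if __name__ == "__main__":
    encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64))
    seen = torch.randn(16, 3, 32, 32)        # stand-in for in-domain images
    unseen = torch.randn(16, 3, 32, 32) + 1  # stand-in for a shifted, unseen domain
    flip = lambda imgs: torch.flip(imgs, dims=[-1])  # horizontal flip as the transformation
    print("seen-domain gap:  ", invariance_gap(encoder, seen, flip).item())
    print("unseen-domain gap:", invariance_gap(encoder, unseen, flip).item())

Comparing the two gaps is one simple way to probe the paper's central question: whether invariance learned on a seen domain carries over to domains the model has never observed.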
Related papers
- Unsupervised Representation Learning from Sparse Transformation Analysis [79.94858534887801]
We propose to learn representations from sequence data by factorizing the transformations of the latent variables into sparse components.
Input data are first encoded as distributions of latent activations and subsequently transformed using a probability flow model.
arXiv Detail & Related papers (2024-10-07T23:53:25Z)
- Learning Divergence Fields for Shift-Robust Graph Representations [73.11818515795761]
In this work, we propose a geometric diffusion model with learnable divergence fields for the challenging problem of learning with interdependent data.
We derive a new learning objective through causal inference, which can guide the model to learn generalizable patterns of interdependence that are insensitive across domains.
arXiv Detail & Related papers (2024-06-07T14:29:21Z)
- Interpreting Equivariant Representations [5.325297567945828]
In this paper, we demonstrate that the inductive bias imposed by an equivariant model must also be taken into account when using its latent representations.
We show that failing to account for these inductive biases degrades performance on downstream tasks, and that accounting for them improves it.
arXiv Detail & Related papers (2024-01-23T09:43:30Z)
- Enhancing Evolving Domain Generalization through Dynamic Latent Representations [47.3810472814143]
We propose Mutual Information-Based Sequential Autoencoders (MISTS), a framework that learns both dynamic and invariant features.
Our experimental results on both synthetic and real-world datasets demonstrate that MISTS succeeds in capturing both evolving and invariant information.
arXiv Detail & Related papers (2024-01-16T16:16:42Z)
- Invariant Causal Mechanisms through Distribution Matching [86.07327840293894]
In this work we provide a causal perspective and a new algorithm for learning invariant representations.
Empirically, we show that this algorithm works well on a diverse set of tasks; in particular, we observe state-of-the-art performance on domain generalization.
arXiv Detail & Related papers (2022-06-23T12:06:54Z)
- PAC Generalization via Invariant Representations [41.02828564338047]
We consider the notion of $\epsilon$-approximate invariance in a finite-sample setting.
Inspired by PAC learning, we obtain finite-sample out-of-distribution generalization guarantees.
Our results show bounds that do not scale with the ambient dimension when intervention sites are restricted to lie in a constant-size subset of nodes with bounded in-degree.
arXiv Detail & Related papers (2022-05-30T15:50:14Z)
- Out-of-distribution Generalization with Causal Invariant Transformations [17.18953986654873]
In this work, we tackle the OOD problem without explicitly recovering the causal feature.
Under the setting of an invariant causal mechanism, we theoretically show that if all such transformations are available, then we can learn a minimax optimal model.
Noticing that knowing a complete set of these causal invariant transformations may be impractical, we further show that it suffices to know only a subset of these transformations.
arXiv Detail & Related papers (2022-03-22T08:04:38Z)
- Improving the Sample-Complexity of Deep Classification Networks with Invariant Integration [77.99182201815763]
Leveraging prior knowledge on intraclass variance due to transformations is a powerful method to improve the sample complexity of deep neural networks.
We propose a novel monomial selection algorithm based on pruning methods to allow an application to more complex problems.
We demonstrate the improved sample complexity on the Rotated-MNIST, SVHN and CIFAR-10 datasets.
arXiv Detail & Related papers (2022-02-08T16:16:11Z)
- Staying in Shape: Learning Invariant Shape Representations using Contrastive Learning [5.100152971410397]
Most existing invariant shape representations are handcrafted, and previous work on learning shape representations does not focus on producing invariants.
We show experimentally that our method outperforms previous unsupervised learning approaches in both effectiveness and robustness.
arXiv Detail & Related papers (2021-07-08T00:53:24Z)
- Exploring Complementary Strengths of Invariant and Equivariant Representations for Few-Shot Learning [96.75889543560497]
In many real-world problems, collecting a large number of labeled samples is infeasible.
Few-shot learning is the dominant approach to address this issue, where the objective is to quickly adapt to novel categories in presence of a limited number of samples.
We propose a novel training mechanism that simultaneously enforces equivariance and invariance to a general set of geometric transformations.
arXiv Detail & Related papers (2021-03-01T21:14:33Z)
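As a rough, hedged illustration of the idea in the last entry above (simultaneously enforcing invariance and equivariance to geometric transformations), the sketch below combines an invariance term on representations with an auxiliary head that must still recover which rotation was applied. The names `backbone`, `rot_head`, and the rotation set are illustrative placeholders, not that paper's architecture or training mechanism.

# Hedged sketch: a joint invariance + equivariance objective on rotations.
# All components are toy stand-ins, not the cited paper's method.
import torch
import torch.nn as nn
import torch.nn.functional as F

backbone = nn.Sequential(nn.Flatten(), nn.Linear(1 * 28 * 28, 128))
rot_head = nn.Linear(128, 4)  # predicts which of 4 rotations was applied

def rotate(x: torch.Tensor, k: int) -> torch.Tensor:
    """Rotate a batch of images by k * 90 degrees."""
    return torch.rot90(x, k, dims=(2, 3))

def joint_loss(x: torch.Tensor) -> torch.Tensor:
    k = torch.randint(0, 4, (1,)).item()  # sample one rotation for the batch
    z = backbone(x)
    z_t = backbone(rotate(x, k))
    # Invariance: representations of original and rotated inputs should align.
    invariance = 1.0 - F.cosine_similarity(z, z_t, dim=1).mean()
    # Equivariance proxy: the applied rotation must remain decodable from z_t.
    labels = torch.full((x.size(0),), k, dtype=torch.long)
    equivariance = F.cross_entropy(rot_head(z_t), labels)
    return invariance + equivariance

loss = joint_loss(torch.randn(8, 1, 28, 28))
loss.backward()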