Representation Learning Through Latent Canonicalizations
- URL: http://arxiv.org/abs/2002.11829v1
- Date: Wed, 26 Feb 2020 22:50:12 GMT
- Title: Representation Learning Through Latent Canonicalizations
- Authors: Or Litany, Ari Morcos, Srinath Sridhar, Leonidas Guibas, Judy Hoffman
- Abstract summary: We seek to learn a representation on a large annotated data source that generalizes to a target domain using limited new supervision.
We relax the requirement of explicit latent disentanglement and instead encourage linearity of individual factors of variation.
We demonstrate experimentally that our method helps reduce the number of observations needed to generalize to a similar target domain when compared to a number of supervised baselines.
- Score: 24.136856168381502
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We seek to learn a representation on a large annotated data source that
generalizes to a target domain using limited new supervision. Many prior
approaches to this problem have focused on learning "disentangled"
representations so that as individual factors vary in a new domain, only a
portion of the representation need be updated. In this work, we seek the
generalization power of disentangled representations, but relax the requirement
of explicit latent disentanglement and instead encourage linearity of
individual factors of variation by requiring them to be manipulable by learned
linear transformations. We dub these transformations latent canonicalizers, as
they aim to modify the value of a factor to a pre-determined (but arbitrary)
canonical value (e.g., recoloring the image foreground to black). Assuming a
source domain with access to meta-labels specifying the factors of variation
within an image, we demonstrate experimentally that our method helps reduce the
number of observations needed to generalize to a similar target domain when
compared to a number of supervised baselines.
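As a concrete illustration, here is a minimal PyTorch-style sketch of a latent canonicalizer: a learned linear map applied to an encoder's latent code, trained so that the decoded image shows one factor (e.g., foreground color) at its canonical value. The module and loss names are illustrative, and the training signal shown (a canonicalized target image) is an assumption inferred from the abstract, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentCanonicalizer(nn.Module):
    """Learned linear map that pushes one factor of variation in latent
    space to a fixed canonical value. A sketch of the paper's idea; the
    architecture details here are illustrative."""
    def __init__(self, latent_dim: int):
        super().__init__()
        # One linear transformation of the whole latent code per factor.
        self.linear = nn.Linear(latent_dim, latent_dim, bias=True)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.linear(z)

# Hypothetical training step: encode an image, linearly canonicalize one
# factor (e.g., recolor the foreground to black), and ask the decoder to
# reproduce the image with that factor at its canonical value.
def canonicalization_loss(encoder, decoder, canonicalizer, x, x_canonical):
    z = encoder(x)               # latent code of the input image
    z_canon = canonicalizer(z)   # move the factor to its canonical value
    x_hat = decoder(z_canon)     # decode the canonicalized latent
    return F.mse_loss(x_hat, x_canonical)
```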
Related papers
- Boundless Across Domains: A New Paradigm of Adaptive Feature and Cross-Attention for Domain Generalization in Medical Image Segmentation [1.93061220186624]
Domain-invariant representation learning is a powerful method for domain generalization.
Previous approaches face challenges such as high computational demands, training instability, and limited effectiveness with high-dimensional data.
We propose an Adaptive Feature Blending (AFB) method that generates out-of-distribution samples while exploring the in-distribution space.
arXiv Detail & Related papers (2024-11-22T12:06:24Z)
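The summary above does not spell out how AFB blends features, so the following is only a generic sketch in the same spirit: mix channel-wise feature statistics between batch instances, with blending weights allowed slightly outside [0, 1] so the blended statistics can leave the training distribution. The function name and weight range are hypothetical.

```python
import torch

def blend_feature_statistics(f: torch.Tensor, low: float = -0.2, high: float = 1.2):
    """Blend per-channel feature statistics across instances in a batch.

    NOT the paper's exact AFB algorithm (unspecified in the summary); a
    generic statistics-mixing sketch in which weights outside [0, 1]
    extrapolate the blended statistics out of distribution.
    f: feature map of shape (B, C, H, W).
    """
    mu = f.mean(dim=(2, 3), keepdim=True)            # per-channel mean
    sigma = f.std(dim=(2, 3), keepdim=True) + 1e-6   # per-channel std
    perm = torch.randperm(f.size(0))                 # random partner per instance
    w = torch.empty(f.size(0), 1, 1, 1, device=f.device).uniform_(low, high)
    mu_mix = w * mu + (1 - w) * mu[perm]
    sigma_mix = w * sigma + (1 - w) * sigma[perm]
    return sigma_mix * (f - mu) / sigma + mu_mix     # normalize, then re-style
```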
- Cross-Domain Policy Adaptation by Capturing Representation Mismatch [53.087413751430255]
In reinforcement learning (RL), it is vital to learn policies that transfer across domains with dynamics discrepancies.
In this paper, we consider dynamics adaptation settings where there exists dynamics mismatch between the source domain and the target domain.
We perform representation learning only in the target domain and measure the representation deviations on the transitions from the source domain.
arXiv Detail & Related papers (2024-05-24T09:06:12Z)
- BayeSeg: Bayesian Modeling for Medical Image Segmentation with Interpretable Generalizability [15.410162313242958]
We propose an interpretable Bayesian framework (BayeSeg) to enhance model generalizability for medical image segmentation.
Specifically, we first decompose an image into a spatial-correlated variable and a spatial-variant variable, assigning hierarchical Bayesian priors to explicitly force them to model the domain-stable shape and domain-specific appearance information respectively.
Finally, we develop a variational Bayesian framework to infer the posterior distributions of these explainable variables.
arXiv Detail & Related papers (2023-03-03T04:48:37Z)
- An Image is Worth More Than a Thousand Words: Towards Disentanglement in the Wild [34.505472771669744]
Unsupervised disentanglement has been shown to be theoretically impossible without inductive biases on the models and the data.
We propose a method for disentangling a set of factors which are only partially labeled, as well as separating the complementary set of residual factors.
arXiv Detail & Related papers (2021-06-29T17:54:24Z)
- Domain-Class Correlation Decomposition for Generalizable Person Re-Identification [34.813965300584776]
In person re-identification, the domain and class are correlated.
We show that domain adversarial learning discards certain class information because of this domain-class correlation.
Our model outperforms the state-of-the-art methods on the large-scale domain generalization Re-ID benchmark.
arXiv Detail & Related papers (2021-06-29T09:45:03Z)
- Semantic Distribution-aware Contrastive Adaptation for Semantic Segmentation [50.621269117524925]
Domain adaptive semantic segmentation refers to making predictions on a certain target domain with only annotations of a specific source domain.
We present a semantic distribution-aware contrastive adaptation algorithm that enables pixel-wise representation alignment.
We evaluate SDCA on multiple benchmarks, achieving considerable improvements over existing algorithms.
arXiv Detail & Related papers (2021-05-11T13:21:25Z)
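As a rough illustration of the pixel-wise representation alignment mentioned above, here is a hedged sketch of a prototype-based pixel contrastive loss. SDCA's exact semantic-distribution-aware formulation is not given in the summary, so read this as a generic stand-in, not the paper's algorithm; it also assumes all labels lie in [0, num_classes) with no ignore index.

```python
import torch
import torch.nn.functional as F

def pixel_prototype_contrastive_loss(feats, labels, num_classes, tau=0.1):
    """Pixel-wise contrastive alignment against in-batch class prototypes.

    feats: (B, D, H, W) pixel embeddings; labels: (B, H, W) class ids.
    Pulls each pixel toward its own class prototype and pushes it away
    from the others via a softmax over prototype similarities.
    """
    B, D, H, W = feats.shape
    f = F.normalize(feats, dim=1).permute(0, 2, 3, 1).reshape(-1, D)  # (N, D)
    y = labels.reshape(-1)                                            # (N,)
    # Class prototypes: mean embedding of each class present in the batch.
    protos = torch.stack([
        f[y == c].mean(dim=0) if (y == c).any()
        else torch.zeros(D, device=f.device)
        for c in range(num_classes)
    ])
    protos = F.normalize(protos, dim=1)
    logits = f @ protos.t() / tau       # pixel-to-prototype similarities
    return F.cross_entropy(logits, y)   # attract own class, repel the rest
```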
- Continual Adaptation of Visual Representations via Domain Randomization and Meta-learning [21.50683576864347]
Most standard learning approaches lead to fragile models which are prone to drift when sequentially trained on samples of a different nature.
We show that one way to learn models that are inherently more robust against forgetting is domain randomization.
We devise a meta-learning strategy where a regularizer explicitly penalizes any loss associated with transferring the model from the current domain to different "auxiliary" meta-domains.
arXiv Detail & Related papers (2020-12-08T09:54:51Z)
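A first-order sketch of the regularized objective described above: alongside the current-domain loss, penalize the loss the same model incurs on domain-randomized "auxiliary" meta-domains. The paper's actual meta-learning inner/outer loop may differ; this collapses it to its simplest form, and the function name is hypothetical.

```python
def meta_domain_regularized_loss(model, loss_fn, cur_batch, aux_batches, lam=1.0):
    """Current-domain loss plus a transfer penalty on auxiliary meta-domains.

    A simplified, first-order stand-in for the meta-learning regularizer:
    aux_batches are batches drawn from domain-randomized meta-domains.
    """
    x, y = cur_batch
    total = loss_fn(model(x), y)              # loss on the current domain
    for x_aux, y_aux in aux_batches:          # randomized auxiliary domains
        total = total + lam * loss_fn(model(x_aux), y_aux)
    return total
```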
- Few-shot Image Generation with Elastic Weight Consolidation [53.556446614013105]
Few-shot image generation seeks to generate more data for a given domain from only a few available training examples.
We adapt a pretrained model, without introducing any additional parameters, to the few examples of the target domain.
We demonstrate the effectiveness of our algorithm by generating high-quality results of different target domains.
arXiv Detail & Related papers (2020-12-04T18:57:13Z)
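Elastic Weight Consolidation itself is standard, so the penalty the paper adapts is easy to sketch: parameters are anchored to their source-domain values in proportion to their estimated (diagonal) Fisher information, which lets adaptation proceed without new parameters. How the Fisher is estimated for a generator is left as an assumption here.

```python
def ewc_penalty(model, fisher, theta_star, lam=1e3):
    """Elastic Weight Consolidation penalty.

    Quadratically penalizes movement of each parameter away from its
    source-domain optimum theta_star, weighted by its diagonal Fisher
    information. `fisher` and `theta_star` are dicts keyed by parameter
    name; the total few-shot objective would add this to the generation loss.
    """
    penalty = 0.0
    for name, p in model.named_parameters():
        penalty = penalty + (fisher[name] * (p - theta_star[name]) ** 2).sum()
    return lam * penalty
```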
- Learning Disentangled Representations with Latent Variation Predictability [102.4163768995288]
This paper defines the variation predictability of latent disentangled representations.
Within an adversarial generation process, we encourage variation predictability by maximizing the mutual information between latent variations and corresponding image pairs.
We develop an evaluation metric that does not rely on the ground-truth generative factors to measure the disentanglement of latent representations.
arXiv Detail & Related papers (2020-07-25T08:54:26Z)
- FDA: Fourier Domain Adaptation for Semantic Segmentation [82.4963423086097]
We describe a simple method for unsupervised domain adaptation, whereby the discrepancy between the source and target distributions is reduced by swapping the low-frequency spectrum of one with the other.
We illustrate the method on semantic segmentation, where densely annotated images are plentiful in one domain but difficult to obtain in another.
Our results indicate that even simple procedures can discount nuisance variability in the data that more sophisticated methods struggle to learn away.
arXiv Detail & Related papers (2020-04-11T22:20:48Z)
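The spectrum swap described above is simple enough to sketch directly in NumPy: take the FFT of a source and a target image, replace the centered low-frequency band of the source amplitude with the target's, and invert using the source phase. The band-size parameter `beta` follows the description; exact details may differ from the paper's implementation.

```python
import numpy as np

def fda_spectrum_swap(src, trg, beta=0.01):
    """Give a source image the low-frequency amplitude spectrum of a target.

    src, trg: float arrays of shape (H, W, C). `beta` scales the size of
    the swapped low-frequency band relative to the image size.
    """
    fft_src = np.fft.fft2(src, axes=(0, 1))
    fft_trg = np.fft.fft2(trg, axes=(0, 1))
    amp_src, pha_src = np.abs(fft_src), np.angle(fft_src)
    amp_trg = np.abs(fft_trg)
    # Center the spectra so the low frequencies form a contiguous block.
    amp_src = np.fft.fftshift(amp_src, axes=(0, 1))
    amp_trg = np.fft.fftshift(amp_trg, axes=(0, 1))
    h, w = src.shape[:2]
    b = int(np.floor(min(h, w) * beta))
    ch, cw = h // 2, w // 2
    amp_src[ch - b:ch + b + 1, cw - b:cw + b + 1] = \
        amp_trg[ch - b:ch + b + 1, cw - b:cw + b + 1]
    amp_src = np.fft.ifftshift(amp_src, axes=(0, 1))
    # Recombine the swapped amplitude with the source phase and invert.
    out = np.fft.ifft2(amp_src * np.exp(1j * pha_src), axes=(0, 1))
    return np.real(out)
```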
- Generalizing Convolutional Neural Networks for Equivariance to Lie Groups on Arbitrary Continuous Data [52.78581260260455]
We propose a general method to construct a convolutional layer that is equivariant to transformations from any specified Lie group.
We apply the same model architecture to images, ball-and-stick molecular data, and Hamiltonian dynamical systems.
arXiv Detail & Related papers (2020-02-25T17:40:38Z)
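The paper's construction covers arbitrary Lie groups; as a minimal discrete analogue, here is a lifting convolution equivariant to the four-element rotation group C4, which conveys the core idea (convolve with transformed copies of the filter and keep a group axis) without the continuous-group machinery. This is an illustrative stand-in, not the paper's method.

```python
import torch
import torch.nn.functional as F

def c4_lifting_conv(x, weight):
    """Convolution equivariant to 90-degree rotations (the cyclic group C4).

    Convolves the input with all four rotated copies of the filter and
    stacks the results along a new group axis. Rotating the input by 90
    degrees rotates the feature maps and cyclically shifts the group axis,
    which is the equivariance property in discrete form.
    x: (B, C_in, H, W); weight: (C_out, C_in, k, k).
    """
    outs = [F.conv2d(x, torch.rot90(weight, k=g, dims=(2, 3)), padding='same')
            for g in range(4)]
    return torch.stack(outs, dim=2)  # (B, C_out, 4, H, W)
```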