On the Transfer of Disentangled Representations in Realistic Settings
- URL: http://arxiv.org/abs/2010.14407v2
- Date: Thu, 11 Mar 2021 11:43:10 GMT
- Title: On the Transfer of Disentangled Representations in Realistic Settings
- Authors: Andrea Dittadi, Frederik Träuble, Francesco Locatello, Manuel Wüthrich, Vaibhav Agrawal, Ole Winther, Stefan Bauer, Bernhard Schölkopf
- Abstract summary: We introduce a new high-resolution dataset with 1M simulated images and over 1,800 annotated real-world images.
We propose new architectures in order to scale disentangled representation learning to realistic high-resolution settings.
- Score: 44.367245337475445
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning meaningful representations that disentangle the underlying structure
of the data generating process is considered to be of key importance in machine
learning. While disentangled representations were found to be useful for
diverse tasks such as abstract reasoning and fair classification, their
scalability and real-world impact remain questionable. We introduce a new
high-resolution dataset with 1M simulated images and over 1,800 annotated
real-world images of the same setup. In contrast to previous work, this new
dataset exhibits correlations and a complex underlying structure, and it allows
evaluating transfer to unseen simulated and real-world settings where the encoder
(i) remains in distribution or (ii) is out of distribution. We propose new
architectures in order to scale disentangled representation learning to
realistic high-resolution settings and conduct a large-scale empirical study of
disentangled representations on this dataset. We observe that disentanglement
is a good predictor of out-of-distribution (OOD) task performance.
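As a concrete, hypothetical illustration of this headline finding (not the paper's actual evaluation code), the sketch below checks whether a per-model disentanglement score predicts OOD task error across a sweep of trained models via rank correlation; all metric values are synthetic placeholders.

```python
# Hypothetical sketch: rank-correlate a disentanglement metric with OOD task
# error across a sweep of trained models. All values below are synthetic
# placeholders; in practice each entry would be computed per trained encoder.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_models = 50  # e.g. models from a large-scale hyperparameter sweep

dci_score = rng.uniform(0.1, 0.9, n_models)                   # e.g. DCI disentanglement
ood_error = 1.0 - dci_score + rng.normal(0.0, 0.1, n_models)  # placeholder OOD error

rho, pval = spearmanr(dci_score, ood_error)
print(f"Spearman rho = {rho:.3f} (p = {pval:.1e})")
# A strongly negative rho means more disentangled models tend to have lower
# OOD task error, i.e. disentanglement predicts OOD performance.
```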
Related papers
- Transferring disentangled representations: bridging the gap between synthetic and real images [1.0760018917783072]
We investigate the possibility of leveraging synthetic data to learn general-purpose disentangled representations applicable to real data.
We propose a new interpretable, intervention-based metric to measure how well the underlying factors are encoded in the representation.
Our results indicate that transferring a representation from synthetic to real data while preserving some level of disentanglement is possible and effective.
arXiv Detail & Related papers (2024-09-26T16:25:48Z)
- Zero-Shot Object-Centric Representation Learning [72.43369950684057]
We study current object-centric methods through the lens of zero-shot generalization.
We introduce a benchmark comprising eight different synthetic and real-world datasets.
We find that training on diverse real-world images improves transferability to unseen scenarios.
arXiv Detail & Related papers (2024-08-17T10:37:07Z)
- Enhancing Generalizability of Representation Learning for Data-Efficient 3D Scene Understanding [50.448520056844885]
We propose a generative Bayesian network to produce diverse synthetic scenes with real-world patterns.
A series of experiments demonstrates our method's consistent superiority over existing state-of-the-art pre-training approaches.
arXiv Detail & Related papers (2024-06-17T07:43:53Z)
- The Devil in the Details: Simple and Effective Optical Flow Synthetic Data Generation [19.945859289278534]
We show that the required characteristics in an optical flow dataset are rather simple and present a simpler synthetic data generation method.
With 2D motion-based datasets, we systematically analyze the simplest yet critical factors for generating synthetic datasets.
arXiv Detail & Related papers (2023-08-14T18:01:45Z)
- Leveraging sparse and shared feature activations for disentangled representation learning [112.22699167017471]
We propose to leverage knowledge extracted from a diversified set of supervised tasks to learn a common disentangled representation.
We validate our approach on six real-world distribution-shift benchmarks and on different data modalities.
arXiv Detail & Related papers (2023-04-17T01:33:24Z)
- Decoupling Local and Global Representations of Time Series [38.73548222141307]
We propose a novel generative approach for learning representations of the global and local factors of variation in time series (a generic sketch of such a global/local split appears after this list).
In experiments, we demonstrate successful recovery of the true local and global variability factors on simulated data.
We believe that the proposed way of defining representations is beneficial for data modelling and yields better insights into the complexity of real-world data.
arXiv Detail & Related papers (2022-02-04T17:46:04Z)
- Understanding Dynamics of Nonlinear Representation Learning and Its Application [12.697842097171119]
We study the dynamics of implicit nonlinear representation learning.
We show that the data-architecture alignment condition is sufficient for global convergence.
We derive a new training framework, which satisfies the data-architecture alignment condition without assuming it.
arXiv Detail & Related papers (2021-06-28T16:31:30Z)
- Relation-Guided Representation Learning [53.60351496449232]
We propose a new representation learning method that explicitly models and leverages sample relations.
Our framework preserves the relations between samples well.
By seeking to embed samples into a subspace, we show that our method can address the large-scale and out-of-sample problems.
arXiv Detail & Related papers (2020-07-11T10:57:45Z)
- Mutual Information Maximization for Robust Plannable Representations [82.83676853746742]
We present MIRO, an information-theoretic representation learning algorithm for model-based reinforcement learning (a standard mutual-information lower bound is sketched after this list).
We show that our approach is more robust than reconstruction objectives in the presence of distractors and cluttered scenes.
arXiv Detail & Related papers (2020-05-16T21:58:47Z)
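The global/local decomposition referenced in the time-series entry above can be illustrated with a generic toy encoder: one code summarizes the whole series while a separate code is produced per time step. This is a minimal sketch under those assumptions, not the cited paper's actual generative model.

```python
# Generic sketch (an assumption, not the cited paper's model): split a time
# series representation into one global code shared by the whole series and a
# sequence of local codes, one per time step.
import torch
import torch.nn as nn

class GlobalLocalEncoder(nn.Module):
    def __init__(self, input_dim: int, global_dim: int = 8, local_dim: int = 4):
        super().__init__()
        # Global path: summarize the entire series into a single vector.
        self.global_net = nn.GRU(input_dim, global_dim, batch_first=True)
        # Local path: one code per time step.
        self.local_net = nn.Linear(input_dim, local_dim)

    def forward(self, x: torch.Tensor):
        # x: (batch, time, input_dim)
        _, h_n = self.global_net(x)   # h_n: (1, batch, global_dim)
        z_global = h_n.squeeze(0)     # (batch, global_dim), constant over time
        z_local = self.local_net(x)   # (batch, time, local_dim), varies per step
        return z_global, z_local

x = torch.randn(2, 50, 3)             # 2 series, 50 steps, 3 channels
z_g, z_l = GlobalLocalEncoder(input_dim=3)(x)
print(z_g.shape, z_l.shape)            # torch.Size([2, 8]) torch.Size([2, 50, 4])
```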
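For the mutual-information objective mentioned in the MIRO entry, a common way to maximize MI between paired representations is the InfoNCE lower bound sketched below. This is a generic, standard estimator and not necessarily MIRO's exact objective.

```python
# Standard InfoNCE lower bound on mutual information between paired embeddings
# (a common estimator; MIRO's exact objective and architecture may differ).
import torch
import torch.nn.functional as F

def infonce_mi_lower_bound(z_a: torch.Tensor, z_b: torch.Tensor) -> torch.Tensor:
    # z_a, z_b: (batch, dim) embeddings of paired views/observations
    logits = z_a @ z_b.T                 # similarity of every cross pair
    labels = torch.arange(z_a.shape[0])  # positives lie on the diagonal
    loss = F.cross_entropy(logits, labels)
    # log(batch_size) - loss lower-bounds the mutual information I(a; b)
    return torch.log(torch.tensor(float(z_a.shape[0]))) - loss

z_a, z_b = torch.randn(32, 16), torch.randn(32, 16)
print(infonce_mi_lower_bound(z_a, z_b))  # roughly <= 0 for independent embeddings
```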