A Commentary on the Unsupervised Learning of Disentangled Representations
- URL: http://arxiv.org/abs/2007.14184v1
- Date: Tue, 28 Jul 2020 13:13:45 GMT
- Title: A Commentary on the Unsupervised Learning of Disentangled Representations
- Authors: Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem
- Abstract summary: The goal of the unsupervised learning of disentangled representations is to separate the independent explanatory factors of variation in the data without access to supervision.
We discuss the theoretical result showing that the unsupervised learning of disentangled representations is fundamentally impossible without inductive biases.
- Score: 63.042651834453544
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The goal of the unsupervised learning of disentangled representations is to
separate the independent explanatory factors of variation in the data without
access to supervision. In this paper, we summarize the results of Locatello et
al., 2019, and focus on their implications for practitioners. We discuss the
theoretical result showing that the unsupervised learning of disentangled
representations is fundamentally impossible without inductive biases and the
practical challenges it entails. Finally, we comment on our experimental
findings, highlighting the limitations of state-of-the-art approaches and
directions for future research.
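To make the impossibility claim concrete, here is an informal restatement of the underlying theorem (paraphrasing Theorem 1 of Locatello et al., 2019; the notation is ours and technical conditions are omitted):

```latex
% Informal paraphrase of the impossibility result (Locatello et al., 2019,
% Thm. 1); notation is ours, and support/measurability details are omitted.
Let $d > 1$ and $z \sim P$ with independent factors,
$p(z) = \prod_{i=1}^{d} p(z_i)$. Then there exists an infinite family of
bijections $f\colon \operatorname{supp}(z) \to \operatorname{supp}(z)$ with
\[
  \frac{\partial f_i(u)}{\partial u_j} \neq 0 \quad \text{a.e., for all } i, j,
\]
so that $z$ and $f(z)$ are completely entangled, and yet
\[
  P(z \le u) = P(f(z) \le u) \quad \text{for all } u,
\]
i.e., both representations induce exactly the same marginal distribution.
% Consequence: any model that fits the observations x = g(z) equally well
% could have produced the entangled f(z), so no purely unsupervised
% criterion can identify the disentangled solution without inductive biases.
```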
Related papers
- Self-Distilled Disentangled Learning for Counterfactual Prediction [49.84163147971955]
We propose the Self-Distilled Disentanglement framework, abbreviated as $SD^2$.
Grounded in information theory, it ensures theoretically sound, independent disentangled representations without requiring intricate mutual-information estimator designs.
Our experiments, conducted on both synthetic and real-world datasets, confirm the effectiveness of our approach.
arXiv Detail & Related papers (2024-06-09T16:58:19Z)
- Embarrassingly Simple Unsupervised Aspect Based Sentiment Tuple Extraction [0.6429156819529861]
We propose a simple and novel unsupervised approach to extract opinion terms and the corresponding sentiment polarity for aspect terms in a sentence.
Our experimental evaluations, conducted on four benchmark datasets, demonstrate compelling performance in extracting aspect-oriented opinion words.
arXiv Detail & Related papers (2024-04-21T19:20:42Z)
- Robust Contrastive Learning With Theory Guarantee [25.57187964518637]
Contrastive learning (CL) is a self-supervised training paradigm that allows us to extract meaningful features without any label information.
Our work develops rigorous theories to dissect and identify which components in the unsupervised loss can help improve the robust supervised loss.
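For readers new to the paradigm, a minimal sketch of a standard InfoNCE-style contrastive loss (the generic formulation, not the paper's specific robust loss or its theoretical analysis):

```python
import numpy as np

def info_nce_loss(z1: np.ndarray, z2: np.ndarray, temperature: float = 0.1):
    """Minimal InfoNCE contrastive loss between two augmented views.

    z1, z2: (n, d) L2-normalized embeddings of the same n samples under two
    augmentations. Row i of z1 and row i of z2 form a positive pair; all
    other rows act as negatives. Standard formulation, illustrative only.
    """
    logits = z1 @ z2.T / temperature              # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positives lie on the diagonal: maximize their log-probability.
    return -np.mean(np.diag(log_softmax))
```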
arXiv Detail & Related papers (2023-11-16T08:39:58Z)
- Augment to Interpret: Unsupervised and Inherently Interpretable Graph Embeddings [0.0]
In this paper, we study graph representation learning and we show that data augmentation that preserves semantics can be learned and used to produce interpretations.
Our framework, which we named INGENIOUS, creates inherently interpretable embeddings and eliminates the need for costly additional post-hoc analysis.
arXiv Detail & Related papers (2023-09-28T16:21:40Z)
- A Double Machine Learning Approach to Combining Experimental and Observational Data [59.29868677652324]
We propose a double machine learning approach to combine experimental and observational studies.
Our framework tests for violations of external validity and ignorability under milder assumptions.
arXiv Detail & Related papers (2023-07-04T02:53:11Z)
- On Causally Disentangled Representations [18.122893077772993]
We present an analysis of disentangled representations through the notion of disentangled causal process.
We show that our metrics capture the desiderata of a disentangled causal process.
We perform an empirical study of state-of-the-art disentangled representation learners, using our metrics and dataset to evaluate them from a causal perspective.
arXiv Detail & Related papers (2021-12-10T18:56:27Z)
- Unsupervised Learning of Debiased Representations with Pseudo-Attributes [85.5691102676175]
We propose a simple but effective debiasing technique in an unsupervised manner.
We perform clustering in the feature embedding space and identify pseudo-attributes from the clustering results.
We then employ a novel cluster-based reweighting scheme for learning debiased representations.
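A minimal sketch of this kind of pipeline (the helper name, the use of k-means, and the inverse-frequency weighting rule are our illustrative assumptions, not the paper's exact method):

```python
import numpy as np
from sklearn.cluster import KMeans

def pseudo_attribute_weights(embeddings: np.ndarray, n_clusters: int = 10):
    """Cluster feature embeddings and derive per-sample loss weights.

    Illustrative only: clusters stand in for the unknown (pseudo-)attributes,
    and samples from small clusters are up-weighted so a biased majority
    group does not dominate training.
    """
    # Assign each sample a pseudo-attribute via k-means on the embeddings.
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embeddings)
    # Up-weight rare clusters: weight proportional to 1 / cluster size.
    counts = np.bincount(labels, minlength=n_clusters).astype(float)
    weights = 1.0 / counts[labels]
    weights *= len(weights) / weights.sum()  # normalize to mean weight 1
    return labels, weights

# Usage: scale per-sample training losses by the weights,
# e.g. loss = (weights * per_sample_loss).mean().
```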
arXiv Detail & Related papers (2021-08-06T05:20:46Z)
- Disambiguation of weak supervision with exponential convergence rates [88.99819200562784]
In weakly supervised learning, data are annotated with incomplete yet discriminative information.
In this paper, we focus on partial labelling, an instance of weak supervision where, from a given input, we are given a set of potential targets.
We propose an empirical disambiguation algorithm to recover full supervision from weak supervision.
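As a toy illustration of partial-label disambiguation in general (a self-training loop of our own devising, not the paper's algorithm or the estimator with proven convergence rates):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def disambiguate(X, candidate_sets, n_rounds=5):
    """Recover full labels from partial labels by alternating:
    (1) fit a classifier on the current label guesses;
    (2) re-pick, per sample, the most probable label *within* its
        candidate set. Illustrative self-training sketch only.
    """
    rng = np.random.default_rng(0)
    # Initialize each guess with a random candidate from its set.
    y = np.array([rng.choice(sorted(s)) for s in candidate_sets])
    for _ in range(n_rounds):
        clf = LogisticRegression(max_iter=1000).fit(X, y)
        proba = clf.predict_proba(X)  # columns ordered by clf.classes_
        for i, s in enumerate(candidate_sets):
            cands = [c for c in s if c in clf.classes_]
            if cands:
                # Restrict the argmax to this sample's candidate set.
                idx = [list(clf.classes_).index(c) for c in cands]
                y[i] = cands[int(np.argmax(proba[i, idx]))]
    return y
```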
arXiv Detail & Related papers (2021-02-04T18:14:32Z)
- A Sober Look at the Unsupervised Learning of Disentangled Representations and their Evaluation [63.042651834453544]
We show that the unsupervised learning of disentangled representations is impossible without inductive biases on both the models and the data.
We observe that while the different methods successfully enforce properties "encouraged" by the corresponding losses, well-disentangled models seemingly cannot be identified without supervision.
Our results suggest that future work on disentanglement learning should be explicit about the role of inductive biases and (implicit) supervision.
arXiv Detail & Related papers (2020-10-27T10:17:15Z)