Desiderata for Representation Learning: A Causal Perspective
- URL: http://arxiv.org/abs/2109.03795v1
- Date: Wed, 8 Sep 2021 17:33:54 GMT
- Title: Desiderata for Representation Learning: A Causal Perspective
- Authors: Yixin Wang, Michael I. Jordan
- Abstract summary: We take a causal perspective on representation learning, formalizing non-spuriousness and efficiency (in supervised representation learning) and disentanglement (in unsupervised representation learning) using counterfactual quantities and observable consequences of causal assertions.
This yields computable metrics that can be used to assess the degree to which representations satisfy the desiderata of interest, and to learn non-spurious and disentangled representations from single observational datasets.
- Score: 104.3711759578494
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Representation learning constructs low-dimensional representations to
summarize essential features of high-dimensional data. This learning problem is
often approached by describing various desiderata associated with learned
representations; e.g., that they be non-spurious, efficient, or disentangled.
It can be challenging, however, to turn these intuitive desiderata into formal
criteria that can be measured and enhanced based on observed data. In this
paper, we take a causal perspective on representation learning, formalizing
non-spuriousness and efficiency (in supervised representation learning) and
disentanglement (in unsupervised representation learning) using counterfactual
quantities and observable consequences of causal assertions. This yields
computable metrics that can be used to assess the degree to which
representations satisfy the desiderata of interest and to learn non-spurious and
disentangled representations from single observational datasets.
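The counterfactual quantities involved are in the spirit of Pearl's probabilities of causation; the sketch below uses that notation, with Z the learned representation (or a feature of it) and Y the outcome, and the paper's exact definitions may differ:

```latex
% Pearl-style probabilities of causation (a sketch; see the paper for
% the exact counterfactual definitions it adopts).
% Y_{Z=z} is the potential outcome of Y had Z been set to z.
\begin{align*}
  % Probability of sufficiency: setting Z to z would produce y
  % even for units where z and y were not observed.
  \mathrm{PS} &= P\big(Y_{Z=z} = y \mid Z = z',\, Y = y'\big) \\
  % Probability of necessity: without z, the outcome y would not
  % have occurred for units where z and y were observed.
  \mathrm{PN} &= P\big(Y_{Z=z'} = y' \mid Z = z,\, Y = y\big)
\end{align*}
```

Roughly, sufficiency corresponds to non-spuriousness (the representation genuinely produces the label) and necessity to efficiency (the representation carries no superfluous detail); the paper derives observable consequences of such assertions so that the metrics can be estimated from data.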
Related papers
- Specify Robust Causal Representation from Mixed Observations [35.387451486213344]
Learning representations purely from observations concerns the problem of learning a low-dimensional, compact representation that is beneficial to prediction models.
We develop a learning method to learn such representation from observational data by regularizing the learning procedure with mutual information measures.
We theoretically and empirically show that the models trained with the learned causal representations are more robust under adversarial attacks and distribution shifts.
arXiv Detail & Related papers (2023-10-21T02:18:35Z)
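As a concrete illustration of regularizing a learning procedure with a mutual-information measure, here is a minimal sketch using an InfoNCE-style contrastive lower bound; `encoder` and `predictor` are hypothetical modules, and this is not the paper's exact objective:

```python
import torch
import torch.nn.functional as F

def infonce_lower_bound(z_a, z_b, temperature=0.1):
    """Contrastive (InfoNCE-style) lower bound on I(z_a; z_b).

    Rows of z_a and z_b are positive pairs; other rows in the batch
    serve as negatives. Maximizing this tightens the bound.
    """
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature                    # (B, B) similarities
    labels = torch.arange(z_a.size(0), device=z_a.device)   # diagonal = positives
    return -F.cross_entropy(logits, labels)

def training_step(encoder, predictor, x, x_aug, y, lam=0.1):
    """One step of a hypothetical objective: fit the predictor while
    regularizing the representation with a mutual-information term."""
    z, z_aug = encoder(x), encoder(x_aug)
    pred_loss = F.cross_entropy(predictor(z), y)
    mi_bound = infonce_lower_bound(z, z_aug)
    return pred_loss - lam * mi_bound                       # minimize this
```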
- Causal Reasoning Meets Visual Representation Learning: A Prospective Study [117.08431221482638]
Lack of interpretability, robustness, and out-of-distribution generalization are among the key challenges of existing visual models.
Inspired by the strong inference ability of human-level agents, recent years have seen great effort devoted to developing causal reasoning paradigms.
This paper aims to provide a comprehensive overview of this emerging field, attract attention, encourage discussion, and bring to the forefront the urgency of developing novel causal reasoning methods.
arXiv Detail & Related papers (2022-04-26T02:22:28Z)
- RELAX: Representation Learning Explainability [10.831313203043514]
We propose RELAX, which is the first approach for attribution-based explanations of representations.
RELAX explains representations by measuring similarities in the representation space between an input and masked-out versions of itself.
We provide theoretical interpretations of RELAX and conduct a novel analysis of feature extractors trained using supervised and unsupervised learning.
arXiv Detail & Related papers (2021-12-19T14:51:31Z)
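A simplified sketch of the masked-similarity idea behind RELAX: occlude random parts of the input and credit the retained pixels in proportion to how similar the masked representation stays to the original. The uniform binary masks and cosine similarity here are simplifying assumptions, and `encoder` is a placeholder:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def masked_similarity_attribution(encoder, x, n_masks=100, keep_prob=0.5):
    """Attribute the representation of image x (C, H, W) to its pixels.

    For each random mask, compare the representation of the masked image
    to that of the original; pixels kept by masks that preserve similarity
    accumulate importance (a RELAX-like estimator, simplified).
    """
    h, w = x.shape[-2:]
    z = F.normalize(encoder(x.unsqueeze(0)), dim=-1)    # reference representation
    importance = torch.zeros(h, w)
    for _ in range(n_masks):
        mask = (torch.rand(h, w) < keep_prob).float()   # 1 = pixel kept
        z_m = F.normalize(encoder((x * mask).unsqueeze(0)), dim=-1)
        sim = (z * z_m).sum()                           # cosine similarity
        importance += sim * mask                        # credit retained pixels
    return importance / n_masks
```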
- A Tutorial on Learning Disentangled Representations in the Imaging Domain [13.320565017546985]
Disentangled representation learning has been proposed as an approach to learning general representations.
A good general representation can be readily fine-tuned for new target tasks using modest amounts of data.
Disentangled representations can offer model explainability and can help us understand the underlying causal relations of the factors of variation.
arXiv Detail & Related papers (2021-08-26T21:44:10Z)
- Which Mutual-Information Representation Learning Objectives are Sufficient for Control? [80.2534918595143]
Mutual information provides an appealing formalism for learning representations of data.
This paper formalizes the sufficiency of a state representation for learning and representing the optimal policy.
Surprisingly, we find that two of the objectives considered can yield insufficient representations given mild and common assumptions on the structure of the MDP.
arXiv Detail & Related papers (2021-06-14T10:12:34Z)
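Representative mutual-information objectives of the kind analyzed in the paper above, writing s_t and a_t for state and action and z_t for the learned representation; these are common forms from the literature, and the paper's exact formulations may differ:

```latex
% Common MI representation-learning objectives for control (a sketch;
% the paper's precise definitions take precedence).
\begin{align*}
  J_{\mathrm{fwd}}   &= I(z_{t+1};\, z_t, a_t)  && \text{forward dynamics information} \\
  J_{\mathrm{state}} &= I(z_{t+1};\, z_t)       && \text{state-only transition information} \\
  J_{\mathrm{inv}}   &= I(a_t;\, z_t, z_{t+1})  && \text{inverse dynamics information}
\end{align*}
```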
- Odd-One-Out Representation Learning [1.6822770693792826]
We show that a weakly-supervised downstream task based on odd-one-out observations is suitable for model selection.
We also show that a bespoke metric-learning VAE that performs well on this task also outperforms standard unsupervised models and a weakly-supervised disentanglement model.
arXiv Detail & Related papers (2020-12-14T22:01:15Z)
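One plausible way to realize the odd-one-out task above from learned embeddings (an illustrative assumption, not necessarily the paper's protocol) is to flag the item in each trial whose representation is farthest, on average, from the rest:

```python
import torch

def predict_odd_one_out(embeddings: torch.Tensor) -> int:
    """Given (n_items, dim) embeddings for one odd-one-out trial, return
    the index of the item with the largest mean distance to the others."""
    dists = torch.cdist(embeddings, embeddings)        # (n, n) pairwise distances
    mean_dist = dists.sum(dim=1) / (embeddings.size(0) - 1)
    return int(mean_dist.argmax())
```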
- A Sober Look at the Unsupervised Learning of Disentangled Representations and their Evaluation [63.042651834453544]
We show that the unsupervised learning of disentangled representations is impossible without inductive biases on both the models and the data.
We observe that while the different methods successfully enforce properties "encouraged" by the corresponding losses, well-disentangled models seemingly cannot be identified without supervision.
Our results suggest that future work on disentanglement learning should be explicit about the role of inductive biases and (implicit) supervision.
arXiv Detail & Related papers (2020-10-27T10:17:15Z)
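The impossibility claim above can be stated informally as follows (a paraphrase; see the paper for the precise theorem and its conditions):

```latex
% For any factorized prior there is a measure-preserving bijection that
% fully entangles the latent factors, so without inductive biases the
% disentangled and entangled models are observationally indistinguishable.
p(z) = \prod_i p(z_i)
\;\Longrightarrow\;
\exists\, f : \mathcal{Z} \to \mathcal{Z} \ \text{bijective with}\
p(f(z)) = p(z)
\ \text{and}\
\frac{\partial f_i(z)}{\partial z_j} \neq 0 \ \text{a.e. for all } i, j.
```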
- Explainable Recommender Systems via Resolving Learning Representations [57.24565012731325]
Explanations can help improve user experience and reveal system defects.
We propose a novel explainable recommendation model that improves the transparency of the representation learning process.
arXiv Detail & Related papers (2020-08-21T05:30:48Z)
- Interpretable Representations in Explainable AI: From Theory to Practice [7.031336702345381]
Interpretable representations are the backbone of many explainers that target black-box predictive systems.
We study properties of interpretable representations that encode presence and absence of human-comprehensible concepts.
arXiv Detail & Related papers (2020-08-16T21:44:03Z)
- Weakly-Supervised Disentanglement Without Compromises [53.55580957483103]
Intelligent agents should be able to learn useful representations by observing changes in their environment.
We model such observations as pairs of non-i.i.d. images sharing at least one of the underlying factors of variation.
We show that only knowing how many factors have changed, but not which ones, is sufficient to learn disentangled representations.
arXiv Detail & Related papers (2020-02-07T16:39:31Z)
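The observation model above can be made concrete with a toy generator: resample a random subset of k factors for the second view and expose only k, never the changed indices. A minimal sketch with hypothetical factor ranges; images would come from a decoder such as a placeholder render(z):

```python
import torch

def sample_weak_pair(n_factors=5, k=2):
    """Sample factor vectors differing in exactly k randomly chosen factors.

    The learner would see the two rendered images and k, but not which
    factors changed (the weak supervision signal studied in the paper)."""
    z1 = torch.rand(n_factors)
    changed = torch.randperm(n_factors)[:k]   # hidden from the learner
    z2 = z1.clone()
    z2[changed] = torch.rand(k)
    return z1, z2, k                          # images: render(z1), render(z2)
```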
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.