A Tutorial on Learning Disentangled Representations in the Imaging
Domain
- URL: http://arxiv.org/abs/2108.12043v1
- Date: Thu, 26 Aug 2021 21:44:10 GMT
- Title: A Tutorial on Learning Disentangled Representations in the Imaging
Domain
- Authors: Xiao Liu, Pedro Sanchez, Spyridon Thermos, Alison Q. O'Neil, and
Sotirios A. Tsaftaris
- Abstract summary: Disentangled representation learning has been proposed as an approach to learning general representations.
A good general representation can be readily fine-tuned for new target tasks using modest amounts of data.
Disentangled representations can offer model explainability and can help us understand the underlying causal relations of the factors of variation.
- Score: 13.320565017546985
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Disentangled representation learning has been proposed as an approach to
learning general representations. This can be done in the absence of, or with
limited, annotations. A good general representation can be readily fine-tuned
for new target tasks using modest amounts of data, or even be used directly in
unseen domains, achieving remarkable performance in the corresponding task. This
alleviation of the data and annotation requirements offers tantalising
prospects for tractable and affordable applications in computer vision and
healthcare. Finally, disentangled representations can offer model
explainability and can help us understand the underlying causal relations of
the factors of variation, increasing their suitability for real-world
deployment. In this tutorial paper, we will offer an overview of
disentangled representation learning, its building blocks and criteria, and
discuss applications in computer vision and medical imaging. We conclude our
tutorial by presenting the identified opportunities for the integration of
recent machine learning advances into disentanglement, as well as the remaining
challenges.
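A widely used building block in this area is the β-VAE objective, which encourages disentanglement by up-weighting the KL term that pulls the latent posterior towards an isotropic Gaussian prior, so that individual latent dimensions tend to align with individual factors of variation. Below is a minimal illustrative sketch in plain Python, assuming a diagonal-Gaussian encoder that has already produced per-dimension means and log-variances (all names are illustrative, not from the tutorial):

```python
import math

def kl_diag_gaussian(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dimensions."""
    return 0.5 * sum(
        math.exp(lv) + m * m - 1.0 - lv
        for m, lv in zip(mu, logvar)
    )

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    """Reconstruction error plus beta-weighted KL regulariser.

    With beta > 1, the pressure towards a factorised latent posterior
    is strengthened, which empirically encourages each latent dimension
    to capture a single factor of variation (at some reconstruction cost).
    """
    recon = sum((a - b) ** 2 for a, b in zip(x, x_recon))
    return recon + beta * kl_diag_gaussian(mu, logvar)

# A posterior that already matches the standard-normal prior incurs zero KL:
assert kl_diag_gaussian([0.0, 0.0], [0.0, 0.0]) == 0.0
```

The choice of β trades reconstruction fidelity against disentanglement pressure; in practice it is tuned per dataset.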
Related papers
- Debiasing Graph Representation Learning based on Information Bottleneck [18.35405511009332]
We present the design and implementation of GRAFair, a new framework based on a variational graph auto-encoder.
The crux of GRAFair is the Conditional Fairness Bottleneck, where the objective is to capture the trade-off between the utility of representations and sensitive information of interest.
Experiments on various real-world datasets demonstrate the effectiveness of our proposed method in terms of fairness, utility, robustness, and stability.
arXiv Detail & Related papers (2024-09-02T16:45:23Z)
- Augment to Interpret: Unsupervised and Inherently Interpretable Graph Embeddings [0.0]
In this paper, we study graph representation learning and we show that data augmentation that preserves semantics can be learned and used to produce interpretations.
Our framework, which we named INGENIOUS, creates inherently interpretable embeddings and eliminates the need for costly additional post-hoc analysis.
arXiv Detail & Related papers (2023-09-28T16:21:40Z)
- An Empirical Investigation of Representation Learning for Imitation [76.48784376425911]
Recent work in vision, reinforcement learning, and NLP has shown that auxiliary representation learning objectives can reduce the need for large amounts of expensive, task-specific data.
We propose a modular framework for constructing representation learning algorithms, then use our framework to evaluate the utility of representation learning for imitation.
arXiv Detail & Related papers (2022-05-16T11:23:42Z)
- Causal Reasoning Meets Visual Representation Learning: A Prospective Study [117.08431221482638]
Lack of interpretability, robustness, and out-of-distribution generalization remain key challenges for existing visual models.
Inspired by the strong inference ability of human-level agents, recent years have witnessed great effort in developing causal reasoning paradigms.
This paper aims to provide a comprehensive overview of this emerging field, attract attention, encourage discussion, and highlight the urgency of developing novel causal reasoning methods.
arXiv Detail & Related papers (2022-04-26T02:22:28Z)
- Desiderata for Representation Learning: A Causal Perspective [104.3711759578494]
We take a causal perspective on representation learning, formalizing non-spuriousness and efficiency (in supervised representation learning) and disentanglement (in unsupervised representation learning).
This yields computable metrics that can be used to assess the degree to which representations satisfy the desiderata of interest and learn non-spurious and disentangled representations from single observational datasets.
arXiv Detail & Related papers (2021-09-08T17:33:54Z)
- Fair Representation Learning using Interpolation Enabled Disentanglement [9.043741281011304]
We propose a novel method to address two key issues: (a) Can we simultaneously learn fair disentangled representations while ensuring the utility of the learned representation for downstream tasks, and (b) Can we provide theoretical insights into when the proposed approach will be both fair and accurate?
To address the former, we propose the method FRIED, Fair Representation learning using Interpolation Enabled Disentanglement.
arXiv Detail & Related papers (2021-07-31T17:32:12Z)
- Reinforcement Learning with Prototypical Representations [114.35801511501639]
Proto-RL is a self-supervised framework that ties representation learning with exploration through prototypical representations.
These prototypes simultaneously serve as a summarization of the exploratory experience of an agent as well as a basis for representing observations.
This enables state-of-the-art downstream policy learning on a set of difficult continuous control tasks.
arXiv Detail & Related papers (2021-02-22T18:56:34Z)
- Explainable Recommender Systems via Resolving Learning Representations [57.24565012731325]
Explanations can help improve user experience and uncover system defects.
We propose a novel explainable recommendation model through improving the transparency of the representation learning process.
arXiv Detail & Related papers (2020-08-21T05:30:48Z)
- Fairness by Learning Orthogonal Disentangled Representations [50.82638766862974]
We propose a novel disentanglement approach to the invariant representation problem.
We enforce the meaningful representation to be agnostic to sensitive information via an entropy-based objective.
The proposed approach is evaluated on five publicly available datasets.
arXiv Detail & Related papers (2020-03-12T11:09:15Z)
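A common mechanism behind fairness-oriented disentanglement of this kind (a generic sketch under stated assumptions, not the actual method of the paper above) is to penalise overlap between the task-relevant and sensitive-attribute latent subspaces, for example with a squared inner-product penalty that is zero exactly when the two representations are orthogonal:

```python
def orthogonality_penalty(z_target, z_sensitive):
    """Squared dot product between the task-relevant representation and the
    sensitive-attribute representation; minimising this pushes the two
    latent subspaces towards orthogonality."""
    dot = sum(a * b for a, b in zip(z_target, z_sensitive))
    return dot * dot

# Orthogonal representations incur no penalty:
assert orthogonality_penalty([1.0, 0.0], [0.0, 1.0]) == 0.0
```

In practice this term would be added, with a tunable weight, to the main task loss so that sensitive information is discouraged from leaking into the task representation.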
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.