Semantic uncertainty intervals for disentangled latent spaces
- URL: http://arxiv.org/abs/2207.10074v1
- Date: Wed, 20 Jul 2022 17:58:10 GMT
- Title: Semantic uncertainty intervals for disentangled latent spaces
- Authors: Swami Sankaranarayanan, Anastasios N. Angelopoulos, Stephen Bates,
Yaniv Romano, Phillip Isola
- Abstract summary: We provide principled uncertainty intervals guaranteed to contain the true semantic factors for any underlying generative model.
This technique reliably communicates semantically meaningful, principled, and instance-adaptive uncertainty in inverse problems like image super-resolution and image completion.
- Score: 30.254614465166245
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Meaningful uncertainty quantification in computer vision requires reasoning
about semantic information -- say, the hair color of the person in a photo or
the location of a car on the street. To this end, recent breakthroughs in
generative modeling allow us to represent semantic information in disentangled
latent spaces, but providing uncertainties on the semantic latent variables has
remained challenging. In this work, we provide principled uncertainty intervals
that are guaranteed to contain the true semantic factors for any underlying
generative model. The method (1) uses quantile regression to output a
heuristic uncertainty interval for each element in the latent space, and (2)
calibrates these intervals so that they contain the true value of the latent
for a new, unseen input. The endpoints of these calibrated
intervals can then be propagated through the generator to produce interpretable
uncertainty visualizations for each semantic factor. This technique reliably
communicates semantically meaningful, principled, and instance-adaptive
uncertainty in inverse problems like image super-resolution and image
completion.
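The calibration step described above can be sketched in the style of conformalized quantile regression. The function below is a minimal illustration, not the paper's actual code: it assumes a quantile regressor has already produced heuristic per-dimension interval endpoints (`lo_hat`, `hi_hat`) for each latent element, and computes an additive correction on a held-out calibration set so that the widened intervals cover the true latent with probability at least 1 - alpha. All names and shapes are assumptions for the sketch.

```python
import numpy as np

def calibrate_intervals(lo_hat, hi_hat, z_true, alpha=0.1):
    """Conformal-style calibration of heuristic latent intervals.

    lo_hat, hi_hat: heuristic interval endpoints from a quantile regressor,
                    shape (n_calib, d) for d latent dimensions.
    z_true:         true latent values on the calibration set, same shape.
    Returns a per-dimension correction q; the calibrated interval
    [lo_hat - q, hi_hat + q] covers the true latent with prob. >= 1 - alpha.
    """
    n = z_true.shape[0]
    # Conformity score: how far the true latent falls outside the interval
    # (negative when it is inside).
    scores = np.maximum(lo_hat - z_true, z_true - hi_hat)
    # Finite-sample-corrected quantile level, clipped to 1.
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, level, axis=0)
```

At test time, one would widen the regressor's predicted interval by `q` and then decode both endpoints through the generator to visualize each semantic factor's uncertainty, as the abstract describes.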
Related papers
- Uncertainty-boosted Robust Video Activity Anticipation [72.14155465769201]
Video activity anticipation aims to predict what will happen in the future, with broad applications ranging from robot vision to autonomous driving.
Despite recent progress, the data uncertainty issue, reflected in the content evolution process and the dynamic correlation of event labels, has largely been ignored.
We propose an uncertainty-boosted robust video activity anticipation framework, which generates uncertainty values to indicate the credibility of the anticipation results.
arXiv Detail & Related papers (2024-04-29T12:31:38Z)
- Principal Uncertainty Quantification with Spatial Correlation for Image Restoration Problems [35.46703074728443]
PUQ -- Principal Uncertainty Quantification -- is a novel definition and corresponding analysis of uncertainty regions.
We derive uncertainty intervals around principal components of the empirical posterior distribution, forming an ambiguity region.
Our approach is verified through experiments on image colorization, super-resolution, and inpainting.
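The idea of intervals around principal components, as summarized above, can be illustrated with a toy sketch. This is not the PUQ authors' code: it simply takes posterior samples, finds their principal directions via SVD, and reports central quantile intervals along each component; all names are assumptions for the sketch.

```python
import numpy as np

def principal_uncertainty_region(samples, alpha=0.1):
    """Toy PUQ-style region: per-component intervals along the principal
    directions of empirical posterior samples of shape (n, d)."""
    mean = samples.mean(axis=0)
    centered = samples - mean
    # Principal directions, ordered by decreasing singular value.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    # Coordinates of each sample along each principal component.
    proj = centered @ vt.T
    lo = np.quantile(proj, alpha / 2, axis=0)
    hi = np.quantile(proj, 1 - alpha / 2, axis=0)
    return mean, vt, lo, hi
```

Because the components are ordered by variance, the first interval is the widest, capturing the dominant mode of ambiguity in the restoration.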
arXiv Detail & Related papers (2023-05-17T11:08:13Z)
- Probabilistic Contrastive Learning Recovers the Correct Aleatoric Uncertainty of Ambiguous Inputs [21.38099300190815]
Contrastively trained encoders have recently been proven to invert the data-generating process.
We extend the common InfoNCE objective and encoders to predict latent distributions instead of points.
arXiv Detail & Related papers (2023-02-06T15:30:08Z)
- Weakly Supervised Representation Learning with Sparse Perturbations [82.39171485023276]
We show that if one has weak supervision from observations generated by sparse perturbations of the latent variables, identification is achievable under unknown continuous latent distributions.
We propose a natural estimation procedure based on this theory and illustrate it on low-dimensional synthetic and image-based experiments.
arXiv Detail & Related papers (2022-06-02T15:30:07Z)
- Learning Conditional Invariance through Cycle Consistency [60.85059977904014]
We propose a novel approach to identify meaningful and independent factors of variation in a dataset.
Our method involves two separate latent subspaces for the target property and the remaining input information.
We demonstrate on synthetic and molecular data that our approach identifies more meaningful factors which lead to sparser and more interpretable models.
arXiv Detail & Related papers (2021-11-25T17:33:12Z) - Where and What? Examining Interpretable Disentangled Representations [96.32813624341833]
Capturing interpretable variations has long been one of the goals in disentanglement learning.
Unlike the independence assumption, interpretability has rarely been exploited to encourage disentanglement in the unsupervised setting.
In this paper, we examine the interpretability of disentangled representations by investigating two questions: where to be interpreted and what to be interpreted.
arXiv Detail & Related papers (2021-04-07T11:22:02Z) - Learning Disentangled Representations with Latent Variation
Predictability [102.4163768995288]
This paper defines the variation predictability of latent disentangled representations.
Within an adversarial generation process, we encourage variation predictability by maximizing the mutual information between latent variations and corresponding image pairs.
We develop an evaluation metric that does not rely on the ground-truth generative factors to measure the disentanglement of latent representations.
arXiv Detail & Related papers (2020-07-25T08:54:26Z)
- Modal Uncertainty Estimation via Discrete Latent Representation [4.246061945756033]
We introduce a deep learning framework that learns the one-to-many mappings between the inputs and outputs, together with faithful uncertainty measures.
Our framework demonstrates significantly more accurate uncertainty estimation than the current state-of-the-art methods.
arXiv Detail & Related papers (2020-07-25T05:29:34Z)
- Learning to Manipulate Individual Objects in an Image [71.55005356240761]
We describe a method to train a generative model with latent factors that are independent and localized.
This means that perturbing the latent variables affects only local regions of the synthesized image, corresponding to objects.
Unlike other unsupervised generative models, ours enables object-centric manipulation, without requiring object-level annotations.
arXiv Detail & Related papers (2020-04-11T21:50:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.