Prediction under Latent Subgroup Shifts with High-Dimensional Observations
- URL: http://arxiv.org/abs/2306.13472v1
- Date: Fri, 23 Jun 2023 12:26:24 GMT
- Title: Prediction under Latent Subgroup Shifts with High-Dimensional Observations
- Authors: William I. Walker, Arthur Gretton, Maneesh Sahani
- Abstract summary: We introduce a new approach to prediction in graphical models with latent-shift adaptation.
Our novel form of RPM identifies causal latent structure in the source environment, and adapts properly to predict in the target.
- Score: 30.433078066683848
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We introduce a new approach to prediction in graphical models with
latent-shift adaptation, i.e., where source and target environments differ in
the distribution of an unobserved confounding latent variable. Previous work
has shown that as long as "concept" and "proxy" variables with appropriate
dependence are observed in the source environment, the latent-associated
distributional changes can be identified, and target predictions adapted
accurately. However, practical estimation methods do not scale well when the
observations are complex and high-dimensional, even if the confounding latent
is categorical. Here we build upon a recently proposed probabilistic
unsupervised learning framework, the recognition-parametrised model (RPM), to
recover low-dimensional, discrete latents from image observations. Applied to
the problem of latent shifts, our novel form of RPM identifies causal latent
structure in the source environment, and adapts properly to predict in the
target. We demonstrate results in settings where predictor and proxy are
high-dimensional images, a context to which previous methods fail to scale.
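The core adaptation idea described in the abstract can be illustrated with a minimal sketch. Assuming the confounding latent U is categorical and that only its marginal shifts between environments (i.e., p(x | u) and p(y | x, u) are shared), target predictions can be formed by importance-weighting the source latent posterior by the identified target marginal. All function and variable names here are illustrative, not the paper's actual API:

```python
import numpy as np

def adapt_prediction(p_y_given_xu, p_u_given_x, p_u_src, p_u_tgt):
    """Reweight source-fitted predictions for a target environment whose
    categorical latent U has shifted from p_u_src to p_u_tgt.

    p_y_given_xu : (K, C) array, p_s(y | x, u) for each of K latent values
    p_u_given_x  : (K,) array, p_s(u | x) under the source model
    p_u_src      : (K,) source marginal p_s(u)
    p_u_tgt      : (K,) identified target marginal p_t(u)
    """
    # Shift the latent posterior: p_t(u|x) ∝ p_s(u|x) * p_t(u) / p_s(u),
    # valid when p(x|u) is invariant across environments.
    w = p_u_given_x * (p_u_tgt / p_u_src)
    w = w / w.sum()
    # Mix the per-subgroup predictors under the shifted latent posterior.
    return w @ p_y_given_xu
```

The hard part, which the paper addresses, is identifying p_t(u) and the per-subgroup conditionals in the first place when the observations are high-dimensional images; this sketch assumes those quantities are already available.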
Related papers
- Learning When the Concept Shifts: Confounding, Invariance, and Dimension Reduction [5.38274042816001]
In observational data, the distribution shift is often driven by unobserved confounding factors.
This motivates us to study the domain adaptation problem with observational data.
We show a model that uses the learned lower-dimensional subspace can incur nearly ideal gap between target and source risk.
arXiv Detail & Related papers (2024-06-22T17:43:08Z)
- Quantification of Predictive Uncertainty via Inference-Time Sampling [57.749601811982096]
We propose a post-hoc sampling strategy for estimating predictive uncertainty accounting for data ambiguity.
The method can generate different plausible outputs for a given input and does not assume parametric forms of predictive distributions.
arXiv Detail & Related papers (2023-08-03T12:43:21Z)
- Causality-oriented robustness: exploiting general additive interventions [3.871660145364189]
In this paper, we focus on causality-oriented robustness and propose Distributional Robustness via Invariant Gradients (DRIG)
In a linear setting, we prove that DRIG yields predictions that are robust among a data-dependent class of distribution shifts.
We extend our approach to the semi-supervised domain adaptation setting to further improve prediction performance.
arXiv Detail & Related papers (2023-07-18T16:22:50Z)
- Quantification of Uncertainties in Deep Learning-based Environment Perception [0.7874708385247353]
We introduce a novel Deep Learning-based method to perceive the environment of a vehicle based on radar scans.
Our algorithm can identify when uncertainty in its predictions stems from an inadequate model.
We prove that uncertainties in the model output correlate with the precision of its predictions.
arXiv Detail & Related papers (2023-06-05T16:35:01Z)
- Unsupervised representation learning with recognition-parametrised probabilistic models [12.865596223775649]
We introduce a new approach to probabilistic unsupervised learning based on the recognition-parametrised model (RPM).
Under the key assumption that observations are conditionally independent given latents, the RPM combines parametric prior and observation-conditioned latent distributions with non-parametric observation factors.
The RPM provides a powerful framework to discover meaningful latent structure underlying observational data, a function critical to both animal and artificial intelligence.
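The conditional-independence structure above admits a simple illustration: with a discrete latent, the posterior is the prior multiplied by one recognition factor per observation. This is a minimal sketch only; it folds the RPM's data-averaged normalisers F_j(z) into the (hypothetical) factors rather than modelling them, and all names are assumptions for illustration:

```python
import numpy as np

def rpm_posterior(prior, recognition_factors):
    """Posterior over a discrete latent z given conditionally
    independent observations, RPM-style.

    prior               : (K,) prior p(z) over K latent values
    recognition_factors : list of (K,) arrays; the j-th entry is the
                          recognition factor f_j(z | x_j) evaluated at
                          the observed x_j (normalisers already folded in)
    """
    log_post = np.log(prior)
    for f in recognition_factors:
        log_post += np.log(f)          # factors multiply under cond. indep.
    log_post -= log_post.max()         # subtract max for numerical stability
    post = np.exp(log_post)
    return post / post.sum()
```

Working in log space keeps the product of many small factors stable when the number of observed modalities grows.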
arXiv Detail & Related papers (2022-09-13T00:33:21Z)
- An Energy-Based Prior for Generative Saliency [62.79775297611203]
We propose a novel generative saliency prediction framework that adopts an informative energy-based model as a prior distribution.
With the generative saliency model, we can obtain a pixel-wise uncertainty map from an image, indicating model confidence in the saliency prediction.
Experimental results show that our generative saliency model with an energy-based prior can achieve not only accurate saliency predictions but also reliable uncertainty maps consistent with human perception.
arXiv Detail & Related papers (2022-04-19T10:51:00Z)
- HYPER: Learned Hybrid Trajectory Prediction via Factored Inference and Adaptive Sampling [27.194900145235007]
We introduce HYPER, a general and expressive hybrid prediction framework.
By modeling traffic agents as a hybrid discrete-continuous system, our approach is capable of predicting discrete intent changes over time.
We train and validate our model on the Argoverse dataset, and demonstrate its effectiveness through comprehensive ablation studies and comparisons with state-of-the-art models.
arXiv Detail & Related papers (2021-10-05T20:20:10Z)
- CovarianceNet: Conditional Generative Model for Correct Covariance Prediction in Human Motion Prediction [71.31516599226606]
We present a new method to correctly predict the uncertainty associated with the predicted distribution of future trajectories.
Our approach, CovarianceNet, is based on a Conditional Generative Model with Gaussian latent variables.
arXiv Detail & Related papers (2021-09-07T09:38:24Z)
- Predicting with Confidence on Unseen Distributions [90.68414180153897]
We connect domain adaptation and predictive uncertainty literature to predict model accuracy on challenging unseen distributions.
We find that the difference of confidences (DoC) of a classifier's predictions successfully estimates the classifier's performance change over a variety of shifts.
We specifically investigate the distinction between synthetic and natural distribution shifts and observe that despite its simplicity DoC consistently outperforms other quantifications of distributional difference.
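The DoC idea described above is simple enough to sketch: the estimated accuracy on the shifted distribution is the source accuracy reduced by the average confidence drop between source and target data. This is a hedged reconstruction from the abstract, not the paper's code; names are illustrative:

```python
import numpy as np

def doc_accuracy_estimate(src_conf, tgt_conf, src_acc):
    """Difference-of-confidences (DoC) estimate of target accuracy.

    src_conf : max-softmax confidences on a held-out source set
    tgt_conf : confidences on the unlabeled, shifted target set
    src_acc  : measured accuracy on the source held-out set
    """
    # DoC: how much average confidence fell under the shift.
    doc = np.mean(src_conf) - np.mean(tgt_conf)
    # Assume accuracy falls by roughly the same amount.
    return src_acc - doc
```

No target labels are needed, which is what makes the estimator usable under genuinely unseen shifts.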
arXiv Detail & Related papers (2021-07-07T15:50:18Z)
- Learning Disentangled Representations with Latent Variation Predictability [102.4163768995288]
This paper defines the variation predictability of latent disentangled representations.
Within an adversarial generation process, we encourage variation predictability by maximizing the mutual information between latent variations and corresponding image pairs.
We develop an evaluation metric that does not rely on the ground-truth generative factors to measure the disentanglement of latent representations.
arXiv Detail & Related papers (2020-07-25T08:54:26Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.