Discovering Distribution Shifts using Latent Space Representations
- URL: http://arxiv.org/abs/2202.02339v1
- Date: Fri, 4 Feb 2022 19:00:16 GMT
- Title: Discovering Distribution Shifts using Latent Space Representations
- Authors: Leo Betthauser, Urszula Chajewska, Maurice Diesendruck, Rohith Pesala
- Abstract summary: It is non-trivial to assess a model's generalizability to new, candidate datasets.
We use embedding space geometry to propose a non-parametric framework for detecting distribution shifts.
- Score: 4.014524824655106
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Rapid progress in representation learning has led to a proliferation of
embedding models, and to associated challenges of model selection and practical
application. It is non-trivial to assess a model's generalizability to new,
candidate datasets and failure to generalize may lead to poor performance on
downstream tasks. Distribution shifts are one cause of reduced
generalizability, and are often difficult to detect in practice. In this paper,
we use the embedding space geometry to propose a non-parametric framework for
detecting distribution shifts, and specify two tests. The first test detects
shifts by establishing a robustness boundary, determined by an intelligible
performance criterion, for comparing reference and candidate datasets. The
second test detects shifts by featurizing and classifying multiple subsamples
of two datasets as in-distribution and out-of-distribution. In evaluation, both
tests detect model-impacting distribution shifts, in various shift scenarios,
for both synthetic and real-world datasets.
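The abstract's second test reads as a classifier two-sample test over featurized subsamples of the reference and candidate embeddings: if held-out subsamples of the two datasets can be separated better than chance, a shift is flagged. The sketch below illustrates that idea only; the subsample featurizer (per-dimension mean and standard deviation), the logistic-regression classifier, the accuracy threshold, and all function names are illustrative assumptions, not the authors' implementation.

```python
# A minimal, hypothetical sketch of a classifier-based shift test on embedding
# subsamples, in the spirit of the paper's second test. The featurizer and
# classifier choices below are assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split


def featurize_subsample(embeddings: np.ndarray) -> np.ndarray:
    """Summarize one subsample of embeddings by simple geometric statistics
    (per-dimension mean and standard deviation)."""
    return np.concatenate([embeddings.mean(axis=0), embeddings.std(axis=0)])


def subsample_features(embeddings: np.ndarray, n_subsamples: int,
                       subsample_size: int, rng: np.random.Generator) -> np.ndarray:
    """Draw random subsamples and featurize each one."""
    feats = []
    for _ in range(n_subsamples):
        idx = rng.choice(len(embeddings), size=subsample_size, replace=False)
        feats.append(featurize_subsample(embeddings[idx]))
    return np.stack(feats)


def classifier_shift_test(reference: np.ndarray, candidate: np.ndarray,
                          n_subsamples: int = 200, subsample_size: int = 50,
                          accuracy_threshold: float = 0.6, seed: int = 0) -> bool:
    """Return True if subsamples of the two datasets can be separated as
    in-distribution vs. out-of-distribution better than the threshold."""
    rng = np.random.default_rng(seed)
    X = np.vstack([
        subsample_features(reference, n_subsamples, subsample_size, rng),
        subsample_features(candidate, n_subsamples, subsample_size, rng),
    ])
    y = np.concatenate([np.zeros(n_subsamples), np.ones(n_subsamples)])  # 0 = in-dist., 1 = out-of-dist.
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=seed, stratify=y)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return clf.score(X_te, y_te) > accuracy_threshold


# Toy usage: reference embeddings vs. a mean-shifted candidate.
if __name__ == "__main__":
    rng = np.random.default_rng(42)
    reference = rng.normal(0.0, 1.0, size=(2000, 32))  # embeddings of the reference dataset
    candidate = rng.normal(0.5, 1.0, size=(2000, 32))  # candidate with a shifted mean
    print("shift detected:", classifier_shift_test(reference, candidate))
```

Thresholding the held-out accuracy here is one intelligible performance criterion; in practice such a boundary would be calibrated against the level at which downstream performance is known to degrade, in the spirit of the paper's first test.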
Related papers
- Downstream-Pretext Domain Knowledge Traceback for Active Learning [138.02530777915362]
We propose a downstream-pretext domain knowledge traceback (DOKT) method that traces the data interactions of downstream knowledge and pre-training guidance.
DOKT consists of a traceback diversity indicator and a domain-based uncertainty estimator.
Experiments conducted on ten datasets show that our model outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2024-07-20T01:34:13Z)
- Invariant Anomaly Detection under Distribution Shifts: A Causal Perspective [6.845698872290768]
Anomaly detection (AD) is the machine learning task of identifying highly discrepant abnormal samples.
Under the constraints of a distribution shift, the assumption that training samples and test samples are drawn from the same distribution breaks down.
We attempt to increase the resilience of anomaly detection models to different kinds of distribution shifts.
arXiv Detail & Related papers (2023-12-21T23:20:47Z)
- Can You Rely on Your Model Evaluation? Improving Model Evaluation with Synthetic Test Data [75.20035991513564]
We introduce 3S Testing, a deep generative modeling framework to facilitate model evaluation.
Our experiments demonstrate that 3S Testing outperforms traditional baselines.
These results raise the question of whether we need a paradigm shift away from limited real test data towards synthetic test data.
arXiv Detail & Related papers (2023-10-25T10:18:44Z)
- Dual Adaptive Representation Alignment for Cross-domain Few-shot Learning [58.837146720228226]
Few-shot learning aims to recognize novel queries with limited support samples by learning from base knowledge.
Recent progress in this setting assumes that the base knowledge and novel query samples are distributed in the same domains.
We propose to address the cross-domain few-shot learning problem where only extremely few samples are available in target domains.
arXiv Detail & Related papers (2023-06-18T09:52:16Z)
- Connective Reconstruction-based Novelty Detection [3.7706789983985303]
Deep learning has enabled us to analyze real-world data which contain unexplained samples.
GAN-based approaches have been widely used to address this problem due to their ability to perform distribution fitting.
We propose a simple yet efficient reconstruction-based method that avoids adding complexities to compensate for the limitations of GAN models.
arXiv Detail & Related papers (2022-10-25T11:09:39Z)
- Identifying the Context Shift between Test Benchmarks and Production Data [1.2259552039796024]
There exists a performance gap between machine learning models' accuracy on dataset benchmarks and real-world production data.
We outline two methods for identifying changes in context that lead to distribution shifts and model prediction errors.
We present two case studies to highlight the implicit assumptions underlying applied machine learning models that tend to lead to errors.
arXiv Detail & Related papers (2022-07-03T14:54:54Z)
- Context-Aware Drift Detection [0.0]
Two-sample tests of homogeneity form the foundation upon which existing approaches to drift detection build.
We develop a more general drift detection framework built upon a foundation of two-sample tests for conditional distributional treatment effects.
arXiv Detail & Related papers (2022-03-16T14:23:02Z)
- Training on Test Data with Bayesian Adaptation for Covariate Shift [96.3250517412545]
Deep neural networks often make inaccurate predictions with unreliable uncertainty estimates.
We derive a Bayesian model that provides for a well-defined relationship between unlabeled inputs under distributional shift and model parameters.
We show that our method improves both accuracy and uncertainty estimation.
arXiv Detail & Related papers (2021-09-27T01:09:08Z)
- Predicting with Confidence on Unseen Distributions [90.68414180153897]
We connect domain adaptation and predictive uncertainty literature to predict model accuracy on challenging unseen distributions.
We find that the difference of confidences (DoC) of a classifier's predictions successfully estimates the classifier's performance change over a variety of shifts.
We specifically investigate the distinction between synthetic and natural distribution shifts and observe that despite its simplicity DoC consistently outperforms other quantifications of distributional difference.
arXiv Detail & Related papers (2021-07-07T15:50:18Z)
- BREEDS: Benchmarks for Subpopulation Shift [98.90314444545204]
We develop a methodology for assessing the robustness of models to subpopulation shift.
We leverage the class structure underlying existing datasets to control the data subpopulations that comprise the training and test distributions.
Applying this methodology to the ImageNet dataset, we create a suite of subpopulation shift benchmarks of varying granularity.
arXiv Detail & Related papers (2020-08-11T17:04:47Z)
- Calibrated Adversarial Refinement for Stochastic Semantic Segmentation [5.849736173068868]
We present a strategy for learning a calibrated predictive distribution over semantic maps, where the probability associated with each prediction reflects its ground truth correctness likelihood.
We demonstrate the versatility and robustness of the approach by achieving state-of-the-art results on the multigrader LIDC dataset and on a modified Cityscapes dataset with injected ambiguities.
We show that the core design can be adapted to other tasks requiring learning a calibrated predictive distribution by experimenting on a toy regression dataset.
arXiv Detail & Related papers (2020-06-23T16:39:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.