Shifts: A Dataset of Real Distributional Shift Across Multiple
Large-Scale Tasks
- URL: http://arxiv.org/abs/2107.07455v1
- Date: Thu, 15 Jul 2021 16:59:34 GMT
- Title: Shifts: A Dataset of Real Distributional Shift Across Multiple
Large-Scale Tasks
- Authors: Andrey Malinin and Neil Band and German Chesnokov and Yarin Gal and
Mark J. F. Gales and Alexey Noskov and Andrey Ploskonosov and Liudmila
Prokhorenkova and Ivan Provilkov and Vatsal Raina and Vyas Raina and Mariya
Shmatova and Panos Tigas and Boris Yangel
- Abstract summary: Given the current state of the field, a standardized large-scale dataset of tasks across a range of modalities affected by distributional shifts is necessary.
We propose the \emph{Shifts} dataset for evaluation of uncertainty estimates and robustness to distributional shift.
- Score: 44.61070965407907
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There has been significant research done on developing methods for improving
robustness to distributional shift and uncertainty estimation. In contrast,
only limited work has examined developing standard datasets and benchmarks for
assessing these approaches. Additionally, most work on uncertainty estimation
and robustness has developed new techniques based on small-scale regression or
image classification tasks. However, many tasks of practical interest have
different modalities, such as tabular data, audio, text, or sensor data, which
offer significant challenges involving regression and discrete or continuous
structured prediction. Thus, given the current state of the field, a
standardized large-scale dataset of tasks across a range of modalities affected
by distributional shifts is necessary. This will enable researchers to
meaningfully evaluate the plethora of recently developed uncertainty
quantification methods, as well as assessment criteria and state-of-the-art
baselines. In this work, we propose the \emph{Shifts Dataset} for evaluation of
uncertainty estimates and robustness to distributional shift. The dataset,
which has been collected from industrial sources and services, is composed of
three tasks, with each corresponding to a particular data modality: tabular
weather prediction, machine translation, and self-driving car (SDC) vehicle
motion prediction. All of these data modalities and tasks are affected by real,
`in-the-wild' distributional shifts and pose interesting challenges with
respect to uncertainty estimation. In this work we provide a description of the
dataset and baseline results for all tasks.
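The Shifts tasks pair each prediction with an uncertainty score, and one common way to jointly assess robustness and uncertainty quality is an error-retention curve: predictions are rejected in order of decreasing uncertainty and the mean error of the retained set is tracked. The sketch below is illustrative only, not the paper's official evaluation code; the function and variable names are placeholders, and a generic per-example error stands in for whichever error a given task uses.

```python
import numpy as np

def error_retention_curve(errors, uncertainties):
    """Reject predictions in order of decreasing uncertainty and track the
    mean error of the examples that remain.

    errors:        per-example errors (e.g. squared error for a regression task)
    uncertainties: per-example uncertainty scores from the model
    Returns (retention_fraction, mean_error_of_retained_set).
    """
    order = np.argsort(-uncertainties)             # most uncertain first
    sorted_errors = errors[order]
    n = len(errors)
    # tail_means[i] = mean error after rejecting the i most uncertain examples
    tail_means = np.cumsum(sorted_errors[::-1])[::-1] / np.arange(n, 0, -1)
    retention = 1.0 - np.arange(n) / n             # fraction of data retained
    return retention, tail_means

# Toy usage: an uncertainty score that is informative but imperfect about the error.
rng = np.random.default_rng(0)
err = rng.gamma(shape=2.0, size=1000)
unc = err + rng.normal(scale=0.5, size=1000)
retention, retained_error = error_retention_curve(err, unc)
# The mean height of the curve (approximately its area, since retention points are
# evenly spaced) summarizes how well uncertainty ranks erroneous predictions; lower is better.
print(f"area under the error-retention curve: {retained_error.mean():.3f}")
```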
Related papers
- A Dataset for Evaluating Online Anomaly Detection Approaches for Discrete Multivariate Time Series [0.01874930567916036]
Current publicly available datasets are too small, lack diversity, and feature only trivial anomalies.
We propose a solution: a diverse, extensive, and non-trivial dataset generated via state-of-the-art simulation tools.
We make different versions of the dataset available, where training and test subsets are offered in contaminated and clean versions.
As expected, baseline experiments show that approaches trained on the semi-supervised version of the dataset outperform their unsupervised counterparts.
arXiv Detail & Related papers (2024-11-21T09:03:12Z)
- Binary Quantification and Dataset Shift: An Experimental Investigation [54.14283123210872]
Quantification is the supervised learning task of training predictors of the class prevalence values of sets of unlabelled data.
The relationship between quantification and other types of dataset shift remains, by and large, unexplored.
We propose a fine-grained taxonomy of types of dataset shift, by establishing protocols for the generation of datasets affected by these types of shift.
arXiv Detail & Related papers (2023-10-06T20:11:27Z)
- Leveraging sparse and shared feature activations for disentangled representation learning [112.22699167017471]
We propose to leverage knowledge extracted from a diversified set of supervised tasks to learn a common disentangled representation.
We validate our approach on six real world distribution shift benchmarks, and different data modalities.
arXiv Detail & Related papers (2023-04-17T01:33:24Z)
- Shifts 2.0: Extending The Dataset of Real Distributional Shifts [25.31085238930148]
We extend the Shifts dataset with two datasets sourced from industrial, high-risk applications of high societal importance.
We consider the tasks of segmentation of white matter Multiple Sclerosis lesions in 3D magnetic resonance brain images and the estimation of power consumption in marine cargo vessels.
These new datasets will allow researchers to further explore robust generalization and uncertainty estimation in new situations.
arXiv Detail & Related papers (2022-06-30T16:51:52Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence from labeled source data and predicts target accuracy as the fraction of unlabeled target examples whose confidence exceeds that threshold (see the sketch after this list).
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
- Evaluating Predictive Uncertainty and Robustness to Distributional Shift Using Real World Data [0.0]
We propose metrics for general regression tasks using the Shifts Weather Prediction dataset.
We also present an evaluation of the baseline methods using these metrics.
arXiv Detail & Related papers (2021-11-08T17:32:10Z)
- Estimating Predictive Uncertainty Under Program Data Distribution Shift [3.603932017607092]
Well-defined uncertainty indicates whether a model's output should (or should not) be trusted.
Existing uncertainty approaches assume that testing samples from a different data distribution would induce unreliable model predictions.
arXiv Detail & Related papers (2021-07-23T01:50:22Z)
- Evaluating Model Robustness and Stability to Dataset Shift [7.369475193451259]
We propose a framework for analyzing stability of machine learning models.
We use the original evaluation data to determine distributions under which the algorithm performs poorly.
We estimate the algorithm's performance on the "worst-case" distribution.
arXiv Detail & Related papers (2020-10-28T17:35:39Z)
- Accurate and Robust Feature Importance Estimation under Distribution Shifts [49.58991359544005]
PRoFILE is a novel feature importance estimation method.
We show significant improvements over state-of-the-art approaches, both in terms of fidelity and robustness.
arXiv Detail & Related papers (2020-09-30T05:29:01Z)
- Diversity inducing Information Bottleneck in Model Ensembles [73.80615604822435]
In this paper, we target the problem of generating effective ensembles of neural networks by encouraging diversity in prediction.
We explicitly optimize a diversity-inducing adversarial loss for learning latent variables, thereby obtaining the diversity in output predictions necessary for modeling multi-modal data.
Compared to the most competitive baselines, we show significant improvements in classification accuracy under a shift in the data distribution.
arXiv Detail & Related papers (2020-03-10T03:10:41Z)
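Below is a minimal NumPy sketch of the ATC idea summarized in the "Leveraging Unlabeled Data to Predict Out-of-Distribution Performance" entry above: pick the confidence threshold so that the fraction of labeled source examples above it matches source accuracy, then report the fraction of unlabeled target examples above that threshold as the predicted target accuracy. This is an illustration under those stated assumptions, not the authors' implementation; the confidence score (e.g. max softmax probability) and all names are placeholders.

```python
import numpy as np

def atc_threshold(source_scores, source_correct):
    """Choose a threshold t so that the fraction of labeled source examples with
    score > t matches the source accuracy."""
    accuracy = source_correct.mean()
    # The (1 - accuracy)-quantile leaves roughly a fraction `accuracy` of scores above it.
    return np.quantile(source_scores, 1.0 - accuracy)

def atc_predicted_accuracy(target_scores, threshold):
    """Predicted target accuracy = fraction of unlabeled target examples whose
    confidence score exceeds the learned threshold."""
    return float((target_scores > threshold).mean())

# Toy usage: confidences on a held-out source set and on a shifted target set.
rng = np.random.default_rng(0)
source_scores = rng.beta(5, 2, size=2000)            # e.g. max softmax probabilities
source_correct = rng.random(2000) < source_scores    # correctness correlated with confidence
target_scores = rng.beta(4, 3, size=2000)            # lower confidences under shift
t = atc_threshold(source_scores, source_correct)
print(f"threshold={t:.3f}  predicted target accuracy={atc_predicted_accuracy(target_scores, t):.3f}")
```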
This list is automatically generated from the titles and abstracts of the papers on this site.