Identifying the Context Shift between Test Benchmarks and Production Data
- URL: http://arxiv.org/abs/2207.01059v1
- Date: Sun, 3 Jul 2022 14:54:54 GMT
- Title: Identifying the Context Shift between Test Benchmarks and Production Data
- Authors: Matthew Groh
- Abstract summary: There exists a performance gap between machine learning models' accuracy on dataset benchmarks and real-world production data.
We outline two methods for identifying changes in context that lead to distribution shifts and model prediction errors.
We present two case-studies to highlight the implicit assumptions underlying applied machine learning models that tend to lead to errors.
- Score: 1.2259552039796024
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Across a wide variety of domains, there exists a performance gap between
machine learning models' accuracy on dataset benchmarks and real-world
production data. Despite the careful design of static dataset benchmarks to
represent the real-world, models often err when the data is out-of-distribution
relative to the data the models have been trained on. We can directly measure
and adjust for some aspects of distribution shift, but we cannot address sample
selection bias, adversarial perturbations, and non-stationarity without knowing
the data generation process. In this paper, we outline two methods for
identifying changes in context that lead to distribution shifts and model
prediction errors: leveraging human intuition and expert knowledge to identify
first-order contexts and developing dynamic benchmarks based on desiderata for
the data generation process. Furthermore, we present two case-studies to
highlight the implicit assumptions underlying applied machine learning models
that tend to lead to errors when attempting to generalize beyond test benchmark
datasets. By paying close attention to the role of context in each prediction
task, researchers can reduce context shift errors and increase generalization
performance.
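As a concrete illustration of the abstract's point that some aspects of distribution shift can be measured directly, the sketch below compares each feature of a static benchmark against production data with a two-sample Kolmogorov-Smirnov test. The variable names and the choice of test are illustrative assumptions, not taken from the paper.
```python
# Illustrative sketch (not the paper's method): flag benchmark features whose
# marginal distribution has drifted in production data.
import numpy as np
from scipy.stats import ks_2samp

def flag_shifted_features(benchmark, production, alpha=0.01):
    """Return column indices whose marginal distribution differs between
    the benchmark and production arrays (two-sample KS test)."""
    shifted = []
    for j in range(benchmark.shape[1]):
        result = ks_2samp(benchmark[:, j], production[:, j])
        if result.pvalue < alpha:
            shifted.append(j)
    return shifted

# Synthetic example: feature 1 is shifted in "production".
rng = np.random.default_rng(0)
bench = rng.normal(size=(1000, 3))
prod = rng.normal(size=(1000, 3))
prod[:, 1] += 0.5  # simulate a context shift in one feature
print(flag_shifted_features(bench, prod))  # likely prints [1]
```
Marginal tests like this catch only part of the problem; the context shifts the paper discusses (sample selection bias, adversarial perturbations, non-stationarity) can arise without being visible in any single feature's marginal distribution.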
Related papers
- A Data-Centric Perspective on Evaluating Machine Learning Models for Tabular Data [9.57464542357693]
This paper demonstrates that model-centric evaluations are biased, as real-world modeling pipelines often require dataset-specific preprocessing and feature engineering.
We select 10 relevant datasets from Kaggle competitions and implement expert-level preprocessing pipelines for each dataset.
After dataset-specific feature engineering, model rankings change considerably, performance differences decrease, and the importance of model selection is reduced.
arXiv Detail & Related papers (2024-07-02T09:54:39Z)
- Simultaneous Improvement of ML Model Fairness and Performance by Identifying Bias in Data [1.76179873429447]
We propose a data preprocessing technique that can detect instances ascribing a specific kind of bias that should be removed from the dataset before training.
In particular, we claim that in problem settings where instances exist with similar features but different labels, caused by variation in protected attributes, an inherent bias gets induced in the dataset.
arXiv Detail & Related papers (2022-10-24T13:04:07Z)
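A minimal sketch of the kind of bias-carrying instances this entry describes: rows whose non-protected features coincide but whose labels and protected attributes differ. The column names and the exact matching rule are hypothetical, not the authors' implementation.
```python
# Illustrative sketch (hypothetical columns): flag groups of rows that share
# non-protected feature values but differ in both the protected attribute
# and the label.
import pandas as pd

def flag_biased_groups(df, features, protected, label):
    """Return lists of row indices with identical `features` values but
    more than one label and more than one protected-attribute value."""
    flagged = []
    for _, group in df.groupby(features):
        if group[label].nunique() > 1 and group[protected].nunique() > 1:
            flagged.append(list(group.index))
    return flagged

# Hypothetical example
df = pd.DataFrame({
    "education": [12, 12, 16],
    "experience": [5, 5, 3],
    "gender": ["F", "M", "F"],   # protected attribute
    "hired": [0, 1, 1],          # label
})
print(flag_biased_groups(df, ["education", "experience"], "gender", "hired"))
# -> [[0, 1]]
```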
- Generalization Properties of Retrieval-based Models [50.35325326050263]
Retrieval-based machine learning methods have enjoyed success on a wide range of problems.
Despite growing literature showcasing the promise of these models, the theoretical underpinning for such models remains underexplored.
We present a formal treatment of retrieval-based models to characterize their generalization ability.
arXiv Detail & Related papers (2022-10-06T00:33:01Z)
- Data-SUITE: Data-centric identification of in-distribution incongruous examples [81.21462458089142]
Data-SUITE is a data-centric framework to identify incongruous regions of in-distribution (ID) data.
We empirically validate Data-SUITE's performance and coverage guarantees.
arXiv Detail & Related papers (2022-02-17T18:58:31Z)
- Discovering Distribution Shifts using Latent Space Representations [4.014524824655106]
It is non-trivial to assess a model's generalizability to new, candidate datasets.
We use embedding space geometry to propose a non-parametric framework for detecting distribution shifts.
arXiv Detail & Related papers (2022-02-04T19:00:16Z)
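As a rough illustration of detecting shift from embedding-space geometry, the sketch below compares nearest-neighbor distances of candidate embeddings against reference embeddings with a rank test; this is an assumed construction, not necessarily the framework proposed in the paper.
```python
# Rough sketch (assumed construction): compare nearest-neighbor distances
# in a model's embedding space to flag a distribution shift.
import numpy as np
from scipy.spatial import cKDTree
from scipy.stats import mannwhitneyu

def embedding_shift_pvalue(reference_emb, candidate_emb):
    """Small p-value: candidate embeddings sit unusually far from the
    reference embeddings, suggesting a distribution shift."""
    tree = cKDTree(reference_emb)
    # k=2 for reference points: the nearest neighbor is the point itself.
    ref_nn = tree.query(reference_emb, k=2)[0][:, 1]
    cand_nn = tree.query(candidate_emb, k=1)[0]
    result = mannwhitneyu(cand_nn, ref_nn, alternative="greater")
    return result.pvalue

rng = np.random.default_rng(0)
reference = rng.normal(size=(500, 32))           # e.g. training-data embeddings
candidate = rng.normal(loc=1.0, size=(200, 32))  # embeddings of new data
print(embedding_shift_pvalue(reference, candidate))  # likely very small
```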
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts target accuracy as the fraction of unlabeled target examples whose confidence exceeds that threshold.
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
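A minimal sketch of the ATC idea as summarized above: calibrate a confidence threshold on labeled source validation data so that the fraction of points above it matches the source accuracy, then report the fraction of unlabeled target confidences above that threshold as the estimated target accuracy. The use of max-softmax confidence and the variable names are assumptions for illustration.
```python
# Minimal sketch of the ATC idea as summarized above (max-softmax confidence
# and variable names are assumptions for illustration).
import numpy as np

def atc_estimate(val_conf, val_correct, target_conf):
    """val_conf: confidences on labeled source validation data.
    val_correct: whether each validation prediction was correct.
    target_conf: confidences on unlabeled target data."""
    source_acc = np.mean(val_correct)
    # Threshold chosen so the fraction of validation confidences above it
    # matches the source validation accuracy.
    threshold = np.quantile(val_conf, 1.0 - source_acc)
    # Estimated target accuracy: fraction of target confidences above threshold.
    return np.mean(target_conf >= threshold)

# Hypothetical usage with made-up confidences
rng = np.random.default_rng(0)
val_conf = rng.uniform(0.5, 1.0, size=1000)
val_correct = rng.uniform(size=1000) < val_conf   # roughly calibrated model
target_conf = rng.uniform(0.4, 0.95, size=1000)   # confidences under shift
print(atc_estimate(val_conf, val_correct, target_conf))
```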
- General Greedy De-bias Learning [163.65789778416172]
We propose a General Greedy De-bias learning framework (GGD), which greedily trains the biased models and the base model like gradient descent in functional space.
GGD can learn a more robust base model in both settings: task-specific biased models with prior knowledge and self-ensemble biased models without prior knowledge.
arXiv Detail & Related papers (2021-12-20T14:47:32Z)
- Evaluating Predictive Uncertainty and Robustness to Distributional Shift Using Real World Data [0.0]
We propose metrics for general regression tasks using the Shifts Weather Prediction dataset.
We also present an evaluation of the baseline methods using these metrics.
arXiv Detail & Related papers (2021-11-08T17:32:10Z)
- An Information-theoretic Approach to Distribution Shifts [9.475039534437332]
Safely deploying machine learning models to the real world is often a challenging process.
Models trained with data obtained from a specific geographic location tend to fail when queried with data obtained elsewhere.
Neural networks that are fit to a subset of the population might carry some selection bias into their decision process.
arXiv Detail & Related papers (2021-06-07T16:44:21Z)
- BREEDS: Benchmarks for Subpopulation Shift [98.90314444545204]
We develop a methodology for assessing the robustness of models to subpopulation shift.
We leverage the class structure underlying existing datasets to control the data subpopulations that comprise the training and test distributions.
Applying this methodology to the ImageNet dataset, we create a suite of subpopulation shift benchmarks of varying granularity.
arXiv Detail & Related papers (2020-08-11T17:04:47Z)
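A toy version of the subpopulation-shift construction described above: each superclass keeps the same label in train and test, but disjoint subclasses populate the two sides. The class hierarchy here is hypothetical, not the ImageNet hierarchy the BREEDS benchmarks are built from.
```python
# Toy illustration of a subpopulation-shift split in the spirit of BREEDS:
# superclasses keep the same label in train and test, but the underlying
# subclasses are disjoint, so the test set probes subpopulation shift.
# (Hypothetical class hierarchy; not the actual BREEDS/ImageNet tooling.)
hierarchy = {
    "dog": ["beagle", "husky", "poodle", "terrier"],
    "cat": ["siamese", "persian", "tabby", "sphynx"],
}

def make_subpopulation_split(hierarchy, train_fraction=0.5):
    train_subclasses, test_subclasses = {}, {}
    for superclass, subclasses in hierarchy.items():
        cut = int(len(subclasses) * train_fraction)
        train_subclasses[superclass] = subclasses[:cut]
        test_subclasses[superclass] = subclasses[cut:]
    return train_subclasses, test_subclasses

train_split, test_split = make_subpopulation_split(hierarchy)
print(train_split)  # {'dog': ['beagle', 'husky'], 'cat': ['siamese', 'persian']}
print(test_split)   # {'dog': ['poodle', 'terrier'], 'cat': ['tabby', 'sphynx']}
```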
- Estimating Generalization under Distribution Shifts via Domain-Invariant Representations [75.74928159249225]
We use a set of domain-invariant predictors as a proxy for the unknown, true target labels.
The error of the resulting risk estimate depends on the target risk of the proxy model.
arXiv Detail & Related papers (2020-07-06T17:21:24Z)
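For the last entry, a bare-bones sketch of the proxy idea: estimate a model's target-domain error as its disagreement with a domain-invariant proxy on unlabeled target inputs. Using a single proxy model is a simplification of the set of predictors the paper describes.
```python
# Bare-bones sketch of the proxy idea in the last entry: estimate a model's
# target-domain error as its disagreement with an (assumed) domain-invariant
# proxy predictor on unlabeled target inputs. A single proxy simplifies the
# set of predictors the paper uses.
import numpy as np

def estimated_target_risk(model_preds, proxy_preds):
    """Fraction of unlabeled target points where the model and the
    domain-invariant proxy disagree (0-1 loss surrogate)."""
    model_preds = np.asarray(model_preds)
    proxy_preds = np.asarray(proxy_preds)
    return np.mean(model_preds != proxy_preds)

# Hypothetical usage: predictions on unlabeled target data
model_preds = [0, 1, 1, 0, 1, 1, 0, 0]
proxy_preds = [0, 1, 0, 0, 1, 1, 1, 0]
print(estimated_target_risk(model_preds, proxy_preds))  # 0.25
```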
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.