Beyond the training set: an intuitive method for detecting distribution
shift in model-based optimization
- URL: http://arxiv.org/abs/2311.05363v1
- Date: Thu, 9 Nov 2023 13:44:28 GMT
- Title: Beyond the training set: an intuitive method for detecting distribution
shift in model-based optimization
- Authors: Farhan Damani, David H Brookes, Theodore Sternlieb, Cameron Webster,
Stephen Malina, Rishi Jajoo, Kathy Lin, Sam Sinai
- Abstract summary: A common scenario involves using a fixed training set to train models, with the goal of designing new samples that outperform those present in the training data.
A major challenge in this setting is distribution shift, where the distributions of training and design samples are different.
We propose a straightforward method for design practitioners that detects distribution shifts.
- Score: 0.4188114563181614
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Model-based optimization (MBO) is increasingly applied to design problems in
science and engineering. A common scenario involves using a fixed training set
to train models, with the goal of designing new samples that outperform those
present in the training data. A major challenge in this setting is distribution
shift, where the distributions of training and design samples are different.
While some shift is expected, as the goal is to create better designs, this
change can negatively affect model accuracy and subsequently, design quality.
Despite the widespread nature of this problem, addressing it demands deep
domain knowledge and artful application. To tackle this issue, we propose a
straightforward method for design practitioners that detects distribution
shifts. This method trains a binary classifier using knowledge of the unlabeled
design distribution to separate the training data from the design data. The
classifier's logit scores are then used as a proxy measure of distribution
shift. We validate our method in a real-world application by running offline
MBO and evaluating the effect of distribution shift on design quality. We find
that the intensity of the shift in the design distribution varies based on the
number of steps taken by the optimization algorithm, and our simple approach
can identify these shifts. This enables users to constrain their search to
regions where the model's predictions are reliable, thereby increasing the
quality of designs.
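For concreteness, a minimal sketch of the classifier-based probe described above (not the authors' released code) might look like the following. It assumes X_train and X_design are feature matrices for the training and design sets, and a simple logistic-regression classifier stands in for whatever model a practitioner prefers:

```python
# Minimal sketch of a classifier-based shift probe (not the authors' released code).
# Assumes X_train and X_design are 2-D feature arrays; the logistic-regression
# classifier and the decision threshold are illustrative placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

def shift_logits(X_train: np.ndarray, X_design: np.ndarray) -> np.ndarray:
    """Train a binary classifier to separate training data (label 0) from
    design data (label 1) and return its logit scores on the design samples."""
    X = np.vstack([X_train, X_design])
    y = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_design))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    # Large positive logits mean the classifier easily tells a design apart from
    # the training data, i.e. the design looks out-of-distribution.
    return clf.decision_function(X_design)

# Example use: keep only designs the classifier cannot confidently separate.
# scores = shift_logits(X_train, X_design)
# trusted_designs = X_design[scores < 0.0]
```

The cutoff of 0.0 is arbitrary here; in practice the threshold is a user choice tied to where the predictive model's accuracy remains acceptable.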
Related papers
- Design Editing for Offline Model-based Optimization [18.701760631151316]
Offline model-based optimization (MBO) aims to maximize a black-box objective function using only an offline dataset of designs and scores.
A common approach involves training a surrogate model using existing designs and their corresponding scores, and then generating new designs through gradient-based updates with respect to the surrogate model.
This method suffers from the out-of-distribution issue, where the surrogate model may erroneously predict high scores for unseen designs.
We introduce a novel method, Design Editing for Offline Model-based Optimization (DEMO), which leverages a diffusion prior to calibrate overly optimized designs.
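The gradient-based design loop that DEMO and related methods build on can be sketched as follows; this is a generic illustration, not DEMO itself, and `surrogate`, the step count, and the learning rate are hypothetical placeholders:

```python
# Generic sketch of the gradient-ascent design loop described above (not DEMO itself).
# `surrogate` is any differentiable model fit to the offline (design, score) pairs;
# the number of steps and the learning rate are hypothetical choices.
import torch

def optimize_designs(surrogate: torch.nn.Module,
                     x_init: torch.Tensor,
                     steps: int = 100,
                     lr: float = 0.01) -> torch.Tensor:
    x = x_init.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Ascend the surrogate's predicted score; the farther x drifts from the
        # training distribution, the less trustworthy these predictions become.
        loss = -surrogate(x).sum()
        loss.backward()
        opt.step()
    return x.detach()
```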
arXiv Detail & Related papers (2024-05-22T20:00:19Z)
- Informative Data Mining for One-Shot Cross-Domain Semantic Segmentation [84.82153655786183]
We propose a novel framework called Informative Data Mining (IDM) to enable efficient one-shot domain adaptation for semantic segmentation.
IDM provides an uncertainty-based selection criterion to identify the most informative samples, which facilitates quick adaptation and reduces redundant training.
Our approach outperforms existing methods and achieves a new state-of-the-art one-shot performance of 56.7%/55.4% on the GTA5/SYNTHIA to Cityscapes adaptation tasks.
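As a rough sketch of an uncertainty-based selection rule of this kind (the exact IDM criterion may differ), one can rank candidate target images by mean predictive entropy:

```python
# Hedged sketch of an entropy-based selection rule in the spirit of IDM's criterion
# (the actual criterion may differ); `probs` holds per-pixel softmax outputs.
import torch

def select_most_uncertain(probs: torch.Tensor, k: int) -> torch.Tensor:
    """probs: (N, C, H, W) softmax maps for N candidate target images.
    Returns the indices of the k images with the highest mean predictive entropy."""
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)  # (N, H, W)
    scores = entropy.mean(dim=(1, 2))                            # (N,)
    return torch.topk(scores, k).indices
```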
arXiv Detail & Related papers (2023-09-25T15:56:01Z)
- Distributionally Robust Post-hoc Classifiers under Prior Shifts [31.237674771958165]
We investigate the problem of training models that are robust to shifts caused by changes in the distribution of class-priors or group-priors.
We present an extremely lightweight post-hoc approach that performs scaling adjustments to predictions from a pre-trained model.
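A generic example of such a post-hoc scaling adjustment is the standard log-prior correction below; it is shown only to illustrate the idea and is not necessarily the paper's exact procedure:

```python
# Standard log-prior correction shown as a generic example of post-hoc scaling;
# not necessarily the paper's exact adjustment. `logits` come from a pre-trained
# classifier; the prior vectors are assumed to be known or estimated.
import numpy as np

def adjust_for_prior_shift(logits: np.ndarray,
                           train_priors: np.ndarray,
                           test_priors: np.ndarray) -> np.ndarray:
    """Shift class logits by the log-ratio of test to train class priors."""
    return logits + np.log(test_priors / train_priors)

# Example use:
# adjusted = adjust_for_prior_shift(logits, train_priors, test_priors)
# predictions = adjusted.argmax(axis=1)
```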
arXiv Detail & Related papers (2023-09-16T00:54:57Z)
- Consistency Regularization for Generalizable Source-free Domain Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting data from unseen but identically distributed testing sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
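A minimal consistency-regularization term of the kind used in this line of work (not the paper's exact formulation) penalizes disagreement between predictions on two augmented views of the same unlabeled target image; `model`, `weak_aug`, and `strong_aug` are placeholders:

```python
# Minimal consistency-regularization term (not the paper's exact formulation).
# `model`, `weak_aug`, and `strong_aug` are placeholders for a source-pretrained
# network and two data-augmentation callables.
import torch
import torch.nn.functional as F

def consistency_loss(model: torch.nn.Module,
                     x_target: torch.Tensor,
                     weak_aug, strong_aug) -> torch.Tensor:
    with torch.no_grad():
        # The weakly augmented view provides a detached pseudo-target.
        p_weak = F.softmax(model(weak_aug(x_target)), dim=1)
    log_p_strong = F.log_softmax(model(strong_aug(x_target)), dim=1)
    # Penalize disagreement between the two views of the same unlabeled image.
    return F.kl_div(log_p_strong, p_weak, reduction="batchmean")
```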
arXiv Detail & Related papers (2023-08-03T07:45:53Z)
- Learning Neural Models for Natural Language Processing in the Face of Distributional Shift [10.990447273771592]
The dominating NLP paradigm of training a strong neural predictor to perform one task on a specific dataset has led to state-of-the-art performance in a variety of applications.
It builds upon the assumption that the data distribution is stationary, i.e., that the data is sampled from a fixed distribution both at training and test time.
This way of training is inconsistent with how we as humans are able to learn from and operate within a constantly changing stream of information.
It is ill-adapted to real-world use cases where the data distribution is expected to shift over the course of a model's lifetime.
arXiv Detail & Related papers (2021-09-03T14:29:20Z)
- Style Curriculum Learning for Robust Medical Image Segmentation [62.02435329931057]
Deep segmentation models often degrade due to distribution shifts in image intensities between the training and test data sets.
We propose a novel framework to ensure robust segmentation in the presence of such distribution shifts.
arXiv Detail & Related papers (2021-08-01T08:56:24Z)
- Predicting with Confidence on Unseen Distributions [90.68414180153897]
We connect domain adaptation and predictive uncertainty literature to predict model accuracy on challenging unseen distributions.
We find that the difference of confidences (DoC) of a classifier's predictions successfully estimates the classifier's performance change over a variety of shifts.
We specifically investigate the distinction between synthetic and natural distribution shifts and observe that despite its simplicity DoC consistently outperforms other quantifications of distributional difference.
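A bare-bones version of the DoC statistic is easy to compute from held-out softmax outputs; the paper's additional calibration and regression details are omitted in this sketch:

```python
# Bare-bones difference-of-confidences (DoC) statistic; the paper's additional
# calibration and regression steps are omitted. `source_probs` and `target_probs`
# are (N, C) softmax outputs on held-out source data and on the shifted target data.
import numpy as np

def difference_of_confidences(source_probs: np.ndarray,
                              target_probs: np.ndarray) -> float:
    """Average max-softmax confidence on source minus that on the target."""
    return float(source_probs.max(axis=1).mean() - target_probs.max(axis=1).mean())

# A simple use, per the paper's framing: predicted target accuracy is roughly
# source accuracy minus DoC.
# est_target_acc = source_acc - difference_of_confidences(src_probs, tgt_probs)
```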
arXiv Detail & Related papers (2021-07-07T15:50:18Z)
- WILDS: A Benchmark of in-the-Wild Distribution Shifts [157.53410583509924]
Distribution shifts can substantially degrade the accuracy of machine learning systems deployed in the wild.
We present WILDS, a curated collection of 8 benchmark datasets that reflect a diverse range of distribution shifts.
We show that standard training results in substantially lower out-of-distribution performance than in-distribution performance.
arXiv Detail & Related papers (2020-12-14T11:14:56Z)
- Models, Pixels, and Rewards: Evaluating Design Trade-offs in Visual Model-Based Reinforcement Learning [109.74041512359476]
We study a number of design decisions for the predictive model in visual MBRL algorithms.
We find that a range of design decisions that are often considered crucial, such as the use of latent spaces, have little effect on task performance.
We show how this phenomenon is related to exploration and how some of the lower-scoring models on standard benchmarks perform as well as the best-performing models when trained on the same training data.
arXiv Detail & Related papers (2020-12-08T18:03:21Z)
- Robust Federated Learning: The Case of Affine Distribution Shifts [41.27887358989414]
We develop a robust federated learning algorithm that achieves satisfactory performance against distribution shifts in users' samples.
We show that an affine distribution shift indeed suffices to significantly decrease the performance of the learnt classifier for a new test user.
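An affine shift of this kind is simply x -> Ax + b applied to the inputs; the toy sketch below shows how such a perturbation could be simulated (the paper's robust federated training procedure itself is not reproduced, and the perturbation scale is an arbitrary illustrative choice):

```python
# Toy simulation of an affine shift x -> A @ x + b on flattened inputs; the paper's
# robust federated training procedure itself is not reproduced here.
import numpy as np

def affine_shift(X: np.ndarray, A: np.ndarray, b: np.ndarray) -> np.ndarray:
    """X: (N, d) samples; A: (d, d) mixing matrix; b: (d,) offset."""
    return X @ A.T + b

# Example: a mild random affine perturbation of one user's local test data.
# rng = np.random.default_rng(0)
# d = X.shape[1]
# A = np.eye(d) + 0.1 * rng.standard_normal((d, d))
# b = 0.1 * rng.standard_normal(d)
# X_shifted = affine_shift(X, A, b)
```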
arXiv Detail & Related papers (2020-06-16T03:43:59Z)
- Incremental Unsupervised Domain-Adversarial Training of Neural Networks [17.91571291302582]
In the context of supervised statistical learning, it is typically assumed that the training set is drawn from the same distribution as the test samples.
Here we take a different avenue and approach the problem from an incremental point of view, where the model is adapted to the new domain iteratively.
Our results report a clear improvement with respect to the non-incremental case in several datasets, also outperforming other state-of-the-art domain adaptation algorithms.
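For context, the core domain-adversarial ingredient such methods share is a gradient-reversal layer (DANN-style); the sketch below shows only that building block, not the paper's incremental adaptation schedule:

```python
# DANN-style gradient-reversal layer, the standard building block of
# domain-adversarial training; the paper's incremental adaptation schedule
# is not shown here.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd: float):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Flip (and scale) the gradient so the feature extractor is pushed toward
        # domain-invariant features while the domain classifier trains normally.
        return -ctx.lambd * grad_output, None

def grad_reverse(x: torch.Tensor, lambd: float = 1.0) -> torch.Tensor:
    return GradReverse.apply(x, lambd)
```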
arXiv Detail & Related papers (2020-01-13T09:54:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.