Building Resilience to Out-of-Distribution Visual Data via Input Optimization and Model Finetuning
- URL: http://arxiv.org/abs/2211.16228v1
- Date: Tue, 29 Nov 2022 14:06:35 GMT
- Title: Building Resilience to Out-of-Distribution Visual Data via Input Optimization and Model Finetuning
- Authors: Christopher J. Holder, Majid Khonji, Jorge Dias, Muhammad Shafique
- Abstract summary: We propose a preprocessing model that learns to optimise input data for a specific target vision model.
We investigate several out-of-distribution scenarios in the context of semantic segmentation for autonomous vehicles.
We demonstrate that our approach can enable performance on such data comparable to that of a finetuned model.
- Score: 13.804184845195296
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A major challenge in machine learning is resilience to out-of-distribution
data, that is, data that lies outside the distribution of a model's training
data. Training is often performed on limited, carefully curated datasets, so
when a model is deployed there is often a significant distribution shift as
edge cases and anomalies not included in the training data are encountered.
To address this, we propose the Input Optimisation
Network, an image preprocessing model that learns to optimise input data for a
specific target vision model. In this work we investigate several
out-of-distribution scenarios in the context of semantic segmentation for
autonomous vehicles, comparing an Input Optimisation based solution to existing
approaches of finetuning the target model with augmented training data and an
adversarially trained preprocessing model. We demonstrate that our approach can
enable performance on such data comparable to that of a finetuned model, and
subsequently that a combined approach, whereby an input optimisation network is
optimised to target a finetuned model, delivers superior performance to either
method in isolation. Finally, we propose a joint optimisation approach, in
which the input optimisation network and target model are trained simultaneously,
which we demonstrate achieves significant further performance gains,
particularly in challenging edge-case scenarios. We also demonstrate that our
architecture can be reduced to a relatively compact size without a significant
performance impact, potentially facilitating real time embedded applications.
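The core idea can be illustrated with a deliberately minimal sketch: a frozen "target model" sees shifted inputs, and a preprocessing model is trained to optimise those inputs so the frozen target behaves as it would on in-distribution data. Everything below is a toy linear stand-in, not the paper's architecture; the actual Input Optimisation Network is a convolutional image model trained against a semantic segmentation network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "target model": a fixed linear map standing in for a trained network.
d = 4
W = rng.normal(size=(2, d))                 # target weights, kept frozen throughout

# Clean training pairs: labels are the target's behaviour on in-distribution data.
X_clean = rng.normal(size=(d, 256))
Y = W @ X_clean

# Simulate a distribution shift: deployed inputs arrive scaled and offset.
scale, offset = 0.5, 0.5
X_shift = scale * X_clean + offset          # what the deployed system actually sees

# Preprocessing ("input optimisation") model g(x) = A x + b, trained so that the
# frozen target applied to g(x_shift) recovers its clean-input behaviour.
A = np.eye(d)
b = np.zeros((d, 1))
lr = 0.05
n = X_shift.shape[1]
for _ in range(5000):
    r = W @ (A @ X_shift + b) - Y           # residual of frozen target on optimised input
    A -= lr * (W.T @ r) @ X_shift.T / n     # gradient step on A (W never updated)
    b -= lr * (W.T @ r).mean(axis=1, keepdims=True)

loss_raw = np.mean((W @ X_shift - Y) ** 2)              # target on shifted input directly
loss_opt = np.mean((W @ (A @ X_shift + b) - Y) ** 2)    # target on optimised input
print(f"loss without preprocessing: {loss_raw:.6f}")
print(f"loss with input optimisation: {loss_opt:.6f}")
```

Note that only `A` and `b` receive gradient updates; `W` stays frozen, mirroring the paper's setting where the preprocessing model is trained against a fixed target model. The joint optimisation variant described above would additionally update the target's parameters in the same loop.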
Related papers
- Adjusting Pretrained Backbones for Performativity [34.390793811659556]
We propose a novel technique to adjust pretrained backbones for performativity in a modular way.
We show how it leads to smaller loss along the retraining trajectory and enables us to effectively select among candidate models to anticipate performance degradations.
arXiv Detail & Related papers (2024-10-06T14:41:13Z) - Data Shapley in One Training Run [88.59484417202454]
Data Shapley provides a principled framework for attributing data's contribution within machine learning contexts.
Existing approaches require re-training models on different data subsets, which is computationally intensive.
This paper introduces In-Run Data Shapley, which addresses these limitations by offering scalable data attribution for a target model of interest.
arXiv Detail & Related papers (2024-06-16T17:09:24Z) - Bridging Model-Based Optimization and Generative Modeling via Conservative Fine-Tuning of Diffusion Models [54.132297393662654]
We introduce a hybrid method that fine-tunes cutting-edge diffusion models via RL to optimise learned reward models.
We demonstrate the capability of our approach to outperform the best designs in offline data, leveraging the extrapolation capabilities of reward models.
arXiv Detail & Related papers (2024-05-30T03:57:29Z) - Implicitly Guided Design with PropEn: Match your Data to Follow the Gradient [52.2669490431145]
PropEn is inspired by 'matching', which enables implicit guidance without training a discriminator.
We show that training with a matched dataset approximates the gradient of the property of interest while remaining within the data distribution.
arXiv Detail & Related papers (2024-05-28T11:30:19Z) - A Two-Phase Recall-and-Select Framework for Fast Model Selection [13.385915962994806]
We propose a two-phase (coarse-recall and fine-selection) model selection framework.
It aims to enhance the efficiency of selecting a robust model by leveraging the models' training performances on benchmark datasets.
The proposed methodology is demonstrated to select a high-performing model about 3x faster than conventional baseline methods.
arXiv Detail & Related papers (2024-03-28T14:44:44Z) - Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the data issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z) - Data-Driven Offline Decision-Making via Invariant Representation Learning [97.49309949598505]
Offline data-driven decision-making involves synthesizing optimized decisions with no active interaction.
A key challenge is distributional shift: when we optimize with respect to the input into a model trained from offline data, it is easy to produce an out-of-distribution (OOD) input that appears erroneously good.
In this paper, we formulate offline data-driven decision-making as domain adaptation, where the goal is to make accurate predictions for the value of optimized decisions.
arXiv Detail & Related papers (2022-11-21T11:01:37Z) - HyperImpute: Generalized Iterative Imputation with Automatic Model Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
arXiv Detail & Related papers (2022-06-15T19:10:35Z) - Careful! Training Relevance is Real [0.7742297876120561]
We propose constraints designed to enforce training relevance.
We show through a collection of experimental results that adding the suggested constraints significantly improves the quality of solutions.
arXiv Detail & Related papers (2022-01-12T11:54:31Z) - Model-based Policy Optimization with Unsupervised Model Adaptation [37.09948645461043]
We investigate how to bridge the gap between real and simulated data due to inaccurate model estimation for better policy optimization.
We propose a novel model-based reinforcement learning framework AMPO, which introduces unsupervised model adaptation.
Our approach achieves state-of-the-art performance in terms of sample efficiency on a range of continuous control benchmark tasks.
arXiv Detail & Related papers (2020-10-19T14:19:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all generated summaries) and is not responsible for any consequences of its use.