Test time Adaptation through Perturbation Robustness
- URL: http://arxiv.org/abs/2110.10232v1
- Date: Tue, 19 Oct 2021 20:00:58 GMT
- Title: Test time Adaptation through Perturbation Robustness
- Authors: Prabhu Teja Sivaprasad, François Fleuret
- Abstract summary: We tackle the problem of adapting to domain shift at inference time.
We do not change the training process, but quickly adapt the model at test-time to handle any domain shift.
Our method is on par with or significantly outperforms previous methods.
- Score: 1.52292571922932
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data samples generated by several real-world processes are dynamic in nature,
i.e., their characteristics vary with time. It is therefore not possible to
anticipate and train for every distributional shift between training and
inference using the host of transfer learning methods in the literature. In this
paper, we tackle the problem of adapting to domain shift at inference time,
i.e., we do not change the training process, but quickly adapt the model at
test time to handle any domain shift. For this, we propose to enforce
consistency of predictions over data sampled in the vicinity of the test sample
on the image manifold. On a host of test scenarios, such as dealing with
corruptions (CIFAR-10-C and CIFAR-100-C) and domain adaptation (VisDA-C), our
method is on par with or significantly outperforms previous methods.
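The core mechanism, enforcing agreement between predictions on samples perturbed around the test point, can be sketched in a few lines. Below is a minimal illustration rather than the authors' exact procedure: the photometric augmentations standing in for the image-manifold vicinity sampler, the KL-to-mean consistency loss, and all hyperparameters are assumptions.

```python
import torch
import torch.nn.functional as F
import torchvision.transforms as T

def adapt_and_predict(model, x, n_views=4, steps=1, lr=1e-4):
    """Adapt `model` on a test batch `x` (B, C, H, W) by making its
    predictions agree across perturbed copies of each sample."""
    # The "vicinity" sampler: simple photometric/geometric perturbations.
    augment = T.Compose([
        T.ColorJitter(brightness=0.3, contrast=0.3),
        T.RandomHorizontalFlip(),
    ])
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()  # let normalization layers see test-batch statistics
    for _ in range(steps):
        views = torch.stack([augment(x) for _ in range(n_views)])
        logits = model(views.flatten(0, 1)).view(n_views, x.size(0), -1)
        probs = logits.softmax(dim=-1)
        mean_p = probs.mean(dim=0, keepdim=True)
        # Pull every view's prediction toward the mean prediction.
        loss = F.kl_div(probs.log(), mean_p.expand_as(probs),
                        reduction="batchmean")
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    model.eval()
    with torch.no_grad():
        return model(x).argmax(dim=-1)
```

In practice one would likely restrict the update to a subset of parameters (e.g., normalization layers) to keep adaptation cheap and stable; the sketch updates everything for simplicity.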
Related papers
- Temporal Test-Time Adaptation with State-Space Models [4.248760709042802]
Adapting a model on test samples can help mitigate the drop in performance caused by distribution shift.
Most test-time adaptation methods have focused on synthetic corruption shifts.
We propose STAD, a probabilistic state-space model that adapts a deployed model to temporal distribution shifts.
arXiv Detail & Related papers (2024-07-17T11:18:49Z)
- Align Your Prompts: Test-Time Prompting with Distribution Alignment for Zero-Shot Generalization [64.62570402941387]
We use a single test sample to adapt multi-modal prompts at test time by minimizing the feature distribution shift to bridge the gap in the test domain.
Our method improves zero-shot top-1 accuracy beyond existing prompt-learning techniques, with a 3.08% improvement over the baseline MaPLe.
arXiv Detail & Related papers (2023-11-02T17:59:32Z)
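A hedged sketch of the prompt-alignment idea above: only the learnable prompt is updated so that features of the single test batch match source statistics. The `encode` function and the offline statistics `src_mean`/`src_var` are hypothetical stand-ins, not the paper's API.

```python
import torch

def align_prompt(encode, prompt, x, src_mean, src_var, steps=1, lr=5e-3):
    """Tune only the prompt so features of test batch `x` match
    pre-computed source statistics (`src_mean`, `src_var`)."""
    prompt = prompt.clone().requires_grad_(True)   # backbone stays frozen
    optimizer = torch.optim.AdamW([prompt], lr=lr)
    for _ in range(steps):
        feats = encode(prompt, x)                  # (B, D) image features
        loss = ((feats.mean(dim=0) - src_mean) ** 2).sum() \
             + ((feats.var(dim=0, unbiased=False) - src_var) ** 2).sum()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return prompt.detach()
```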
- A Comprehensive Survey on Test-Time Adaptation under Distribution Shifts [143.14128737978342]
Test-time adaptation, an emerging paradigm, has the potential to adapt a pre-trained model to unlabeled data during testing, before making predictions.
Recent progress in this paradigm highlights the significant benefits of utilizing unlabeled data for training self-adapted models prior to inference.
arXiv Detail & Related papers (2023-03-27T16:32:21Z)
- Evaluating Continual Test-Time Adaptation for Contextual and Semantic Domain Shifts [3.4161707164978137]
We adapt a pre-trained Convolutional Neural Network to domain shifts at test time.
We evaluate the state of the art on two realistic and challenging sources of domain shifts, namely contextual and semantic shifts.
Test-time adaptation methods perform better and forget less on contextual shifts compared to semantic shifts.
arXiv Detail & Related papers (2022-08-18T11:05:55Z)
- Gradual Test-Time Adaptation by Self-Training and Style Transfer [5.110894308882439]
We show the natural connection between gradual domain adaptation and test-time adaptation.
We propose a new method based on self-training and style transfer.
We show the effectiveness of our method on the continual and gradual CIFAR10C, CIFAR100C, and ImageNet-C benchmarks.
arXiv Detail & Related papers (2022-08-16T13:12:19Z)
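The self-training half of the method above can be sketched as confidence-thresholded pseudo-labeling on each slightly shifted incoming batch; the style-transfer component is omitted here, and the threshold is an illustrative assumption.

```python
import torch
import torch.nn.functional as F

def self_train_step(model, x, optimizer, threshold=0.9):
    """One gradual-adaptation step: confident pseudo-labels on the
    current batch supervise an update, so the model tracks the shift."""
    model.eval()
    with torch.no_grad():
        conf, pseudo = model(x).softmax(dim=-1).max(dim=-1)
        keep = conf > threshold            # trust only confident samples
    if keep.any():
        model.train()
        loss = F.cross_entropy(model(x[keep]), pseudo[keep])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```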
- IDANI: Inference-time Domain Adaptation via Neuron-level Interventions [24.60778570114818]
We propose a new approach for domain adaptation (DA), using neuron-level interventions.
We modify the representation of each test example in specific neurons, resulting in a counterfactual example from the source domain.
Our experiments show that our method improves performance on unseen domains.
arXiv Detail & Related papers (2022-06-01T06:39:28Z)
- CAFA: Class-Aware Feature Alignment for Test-Time Adaptation [50.26963784271912]
Test-time adaptation (TTA) aims to address distribution shift by adapting a model to unlabeled data at test time.
We propose a simple yet effective feature alignment loss, termed Class-Aware Feature Alignment (CAFA), which encourages a model to learn target representations in a class-discriminative manner.
arXiv Detail & Related papers (2022-06-01T03:02:07Z)
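One plausible reading of "class-aware feature alignment" is pulling each test feature toward source statistics of its pseudo-label class; the sketch below does so with a squared Mahalanobis distance. The offline statistics `mu` and `prec` and the pseudo-labeling rule are assumptions, not necessarily the paper's exact loss.

```python
import torch

def class_aware_alignment_step(feature_extractor, classifier, x,
                               mu, prec, optimizer):
    """Pull each test feature toward the source centroid of its
    pseudo-label class. `mu`: (K, D) class means, `prec`: (D, D)
    shared inverse covariance, both estimated offline on source data."""
    feats = feature_extractor(x)                  # (B, D)
    pseudo = classifier(feats).argmax(dim=-1)     # (B,) pseudo-labels
    diff = feats - mu[pseudo]                     # (B, D)
    # Squared Mahalanobis distance to the assigned class centroid.
    loss = torch.einsum("bd,de,be->b", diff, prec, diff).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```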
- Continual Test-Time Domain Adaptation [94.51284735268597]
Test-time domain adaptation aims to adapt a source pre-trained model to a target domain without using any source data.
The proposed approach, CoTTA, is easy to implement and can be readily incorporated into off-the-shelf pre-trained models.
arXiv Detail & Related papers (2022-03-25T11:42:02Z)
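A hedged sketch of a continual test-time adaptation loop in the spirit of CoTTA: a weight-averaged teacher supplies soft targets for the student, and a small fraction of weights is stochastically reset to the source model to limit error accumulation. The teacher starts as a deep copy of the source model; the EMA rate and restore probability are illustrative.

```python
import torch
import torch.nn.functional as F

def continual_tta_step(student, teacher, source_state, x, optimizer,
                       ema=0.999, restore_p=0.01):
    """One continual test-time adaptation step on batch `x`."""
    with torch.no_grad():
        target = teacher(x).softmax(dim=-1)       # teacher pseudo-targets
    loss = F.cross_entropy(student(x), target)    # soft-label cross-entropy
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        # Exponential moving average keeps the teacher stable.
        for p_t, p_s in zip(teacher.parameters(), student.parameters()):
            p_t.mul_(ema).add_(p_s, alpha=1.0 - ema)
        # Stochastically restore a few weights to the source model.
        for name, p in student.named_parameters():
            mask = (torch.rand_like(p) < restore_p).float()
            p.copy_(mask * source_state[name] + (1.0 - mask) * p)
```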
- Learning to Generalize across Domains on Single Test Samples [126.9447368941314]
We learn to generalize across domains on single test samples.
We formulate the adaptation to the single test sample as a variational Bayesian inference problem.
Our model achieves at least comparable -- and often better -- performance than state-of-the-art methods on multiple benchmarks for domain generalization.
arXiv Detail & Related papers (2022-02-16T13:21:04Z)
- Adaptive Risk Minimization: Learning to Adapt to Domain Shift [109.87561509436016]
A fundamental assumption of most machine learning algorithms is that the training and test data are drawn from the same underlying distribution.
In this work, we consider the problem setting of domain generalization, where the training data are structured into domains and there may be multiple test time shifts.
We introduce the framework of adaptive risk minimization (ARM), in which models are directly optimized for effective adaptation to shift by learning to adapt on the training domains.
arXiv Detail & Related papers (2020-07-06T17:59:30Z)
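The "learning to adapt on the training domains" idea admits a compact contextual sketch: a context network summarizes an unlabeled batch from one training domain and the predictor conditions on that summary, so the adaptation mechanism itself is trained end to end. The two-argument `predictor` interface is a hypothetical simplification.

```python
import torch
import torch.nn.functional as F

def arm_train_step(context_net, predictor, x, y, optimizer):
    """One ARM-style training step. `x`, `y` come from a single training
    domain, so each batch simulates a test-time adaptation problem."""
    ctx = context_net(x).mean(dim=0, keepdim=True)  # (1, C) batch summary
    ctx = ctx.expand(x.size(0), -1)                 # share it across the batch
    logits = predictor(x, ctx)                      # predictor conditions on context
    loss = F.cross_entropy(logits, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```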
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.