Beyond Model Adaptation at Test Time: A Survey
- URL: http://arxiv.org/abs/2411.03687v1
- Date: Wed, 06 Nov 2024 06:13:57 GMT
- Title: Beyond Model Adaptation at Test Time: A Survey
- Authors: Zehao Xiao, Cees G. M. Snoek
- Abstract summary: Machine learning algorithms struggle when samples in the test distribution start to deviate from the ones observed during training.
Test-time adaptation combines the benefits of domain adaptation and domain generalization by training models only on source data and adapting them to target data during test-time inference.
We provide a comprehensive and systematic review on test-time adaptation, covering more than 400 recent papers.
- Abstract: Machine learning algorithms have achieved remarkable success across various disciplines, use cases and applications, under the prevailing assumption that training and test samples are drawn from the same distribution. Consequently, these algorithms struggle and become brittle even when samples in the test distribution start to deviate from the ones observed during training. Domain adaptation and domain generalization have been studied extensively as approaches to address distribution shifts across test and train domains, but each has its limitations. Test-time adaptation, a recently emerging learning paradigm, combines the benefits of domain adaptation and domain generalization by training models only on source data and adapting them to target data during test-time inference. In this survey, we provide a comprehensive and systematic review on test-time adaptation, covering more than 400 recent papers. We structure our review by categorizing existing methods into five distinct categories based on what component of the method is adjusted for test-time adaptation: the model, the inference, the normalization, the sample, or the prompt, providing detailed analysis of each. We further discuss the various preparation and adaptation settings for methods within these categories, offering deeper insights into the effective deployment for the evaluation of distribution shifts and their real-world application in understanding images, video and 3D, as well as modalities beyond vision. We close the survey with an outlook on emerging research opportunities for test-time adaptation.
Related papers
- A Lost Opportunity for Vision-Language Models: A Comparative Study of Online Test-Time Adaptation for Vision-Language Models [3.0495235326282186]
In deep learning, maintaining robustness against distribution shifts is critical.
This work explores a broad range of possibilities to adapt vision-language foundation models at test-time.
arXiv Detail & Related papers (2024-05-23T18:27:07Z) - Consistency Regularization for Generalizable Source-free Domain Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting the data from unseen but identically distributed testing sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
arXiv Detail & Related papers (2023-08-03T07:45:53Z) - A Comprehensive Survey on Test-Time Adaptation under Distribution Shifts [143.14128737978342]
Test-time adaptation, an emerging paradigm, has the potential to adapt a pre-trained model to unlabeled data during testing, before making predictions.
Recent progress in this paradigm highlights the significant benefits of utilizing unlabeled data for training self-adapted models prior to inference.
arXiv Detail & Related papers (2023-03-27T16:32:21Z) - DELTA: degradation-free fully test-time adaptation [59.74287982885375]
We find that two unfavorable defects are hidden in prevalent adaptation methodologies such as test-time batch normalization (BN) and self-learning.
First, we reveal that the normalization statistics in test-time BN are computed entirely from the currently received test samples, resulting in inaccurate estimates.
Second, we show that during test-time adaptation, the parameter update is biased towards some dominant classes.
arXiv Detail & Related papers (2023-01-30T15:54:00Z) - Test-Time Adaptation with Shape Moments for Image Segmentation [16.794050614196916]
We investigate test-time single-subject adaptation for segmentation.
We propose a Shape-guided Entropy Minimization objective for tackling this task.
We show the potential of integrating various shape priors to guide adaptation to plausible solutions.
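Entropy minimization, the backbone of the objective above, adapts a model by making its test-time predictions more confident. The snippet below is a small illustrative sketch of the entropy term only; the shape-prior guidance is specific to the paper and is omitted here.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def prediction_entropy(logits):
    """Mean Shannon entropy of the softmax predictions (logits: samples x classes).
    Minimizing this value at test time pushes the model toward confident,
    low-entropy outputs on the target data."""
    p = softmax(logits)
    return float(-(p * np.log(p + 1e-12)).sum(axis=1).mean())
```

Uniform logits give the maximum entropy log(C) for C classes, while a confidently peaked prediction drives the value toward zero, which is the quantity such objectives descend.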
arXiv Detail & Related papers (2022-05-16T20:47:13Z) - Learning to Generalize across Domains on Single Test Samples [126.9447368941314]
We learn to generalize across domains on single test samples.
We formulate the adaptation to the single test sample as a variational Bayesian inference problem.
Our model achieves at least comparable -- and often better -- performance than state-of-the-art methods on multiple benchmarks for domain generalization.
arXiv Detail & Related papers (2022-02-16T13:21:04Z) - Unified Regularity Measures for Sample-wise Learning and Generalization [18.10522585996242]
We propose a pair of sample regularity measures for both processes with a formulation-consistent representation.
Experiments validated the effectiveness and robustness of the proposed approaches for mini-batch SGD optimization.
arXiv Detail & Related papers (2021-08-09T10:11:14Z) - Adaptive Risk Minimization: Learning to Adapt to Domain Shift [109.87561509436016]
A fundamental assumption of most machine learning algorithms is that the training and test data are drawn from the same underlying distribution.
In this work, we consider the problem setting of domain generalization, where the training data are structured into domains and there may be multiple test time shifts.
We introduce the framework of adaptive risk minimization (ARM), in which models are directly optimized for effective adaptation to shift by learning to adapt on the training domains.
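The adapt-then-predict pattern underlying ARM can be illustrated with a toy: use the unlabeled test batch to estimate a per-domain nuisance (here, a feature-mean shift), remove it, then classify. This is a deliberately simplified, hypothetical sketch, not the paper's meta-learning procedure; in ARM the parameters would be trained so that exactly this adaptation step succeeds across training domains.

```python
import numpy as np

def adapt_and_predict(weights, x_batch):
    """Toy adapt-then-predict: estimate the domain's input shift from the
    unlabeled test batch (its feature mean), subtract it, then apply a
    linear classifier. Returns binary predictions per sample."""
    shift = x_batch.mean(axis=0)   # unsupervised adaptation signal from the batch
    x_adapted = x_batch - shift    # undo the per-domain shift
    scores = x_adapted @ weights
    return (scores > 0).astype(int)
```

Even a batch translated far from the training distribution is classified correctly after the shift is removed, which is the kind of test-time gain ARM optimizes for during training.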
arXiv Detail & Related papers (2020-07-06T17:59:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.