DELTA: degradation-free fully test-time adaptation
- URL: http://arxiv.org/abs/2301.13018v1
- Date: Mon, 30 Jan 2023 15:54:00 GMT
- Title: DELTA: degradation-free fully test-time adaptation
- Authors: Bowen Zhao, Chen Chen, Shu-Tao Xia
- Abstract summary: We find that two unfavorable defects are concealed in the prevalent adaptation methodologies like test-time batch normalization (BN) and self-learning.
First, we reveal that the normalization statistics in test-time BN are completely affected by the currently received test samples, resulting in inaccurate estimates.
Second, we show that during test-time adaptation, the parameter update is biased towards some dominant classes.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fully test-time adaptation aims at adapting a pre-trained model to the test
stream during real-time inference, which is urgently required when the test
distribution differs from the training distribution. Several efforts have been
devoted to improving adaptation performance. However, we find that two
unfavorable defects are concealed in the prevalent adaptation methodologies
like test-time batch normalization (BN) and self-learning. First, we reveal
that the normalization statistics in test-time BN are completely affected by
the currently received test samples, resulting in inaccurate estimates. Second,
we show that during test-time adaptation, the parameter update is biased
towards some dominant classes. In addition to the extensively studied test
stream with independent and class-balanced samples, we further observe that the
defects can be exacerbated in more complicated test environments, such as
(time-)dependent or class-imbalanced data. We observe that previous approaches
work well in certain scenarios but show performance degradation in others due
to these defects. In this paper, we provide a plug-in solution called DELTA for
Degradation-freE fuLly Test-time Adaptation, which consists of two components:
(i) Test-time Batch Renormalization (TBR), introduced to improve the estimated
normalization statistics. (ii) Dynamic Online re-weighTing (DOT), designed to
address the class bias within optimization. We investigate various test-time
adaptation methods on three commonly used datasets under four scenarios, as
well as on a newly introduced real-world dataset. DELTA helps them handle all
scenarios simultaneously, leading to state-of-the-art (SOTA) performance.
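The two components above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function names, the EMA update of the normalization statistics, and the inverse-frequency weighting are illustrative assumptions based only on the abstract's description (a batch-renorm-style correction for TBR, class-bias re-weighting for DOT).

```python
import numpy as np

def tbr_normalize(x, ema_mean, ema_var, momentum=0.05, eps=1e-5):
    """Test-time Batch Renormalization (illustrative sketch).

    Normalizes with current-batch statistics, then corrects toward
    exponential-moving-average (EMA) test-time statistics via the
    batch-renorm factors r and d. Clipping of r and d is omitted for
    brevity, so x_hat here equals (x - ema_mean) / ema_std exactly;
    in a framework setting, gradients would be stopped through r and d.
    """
    batch_mean = x.mean(axis=0)
    batch_var = x.var(axis=0)
    batch_std = np.sqrt(batch_var + eps)
    ema_std = np.sqrt(ema_var + eps)
    r = batch_std / ema_std                       # scale correction
    d = (batch_mean - ema_mean) / ema_std         # shift correction
    x_hat = (x - batch_mean) / batch_std * r + d
    # update the EMA statistics with the current batch
    new_mean = (1 - momentum) * ema_mean + momentum * batch_mean
    new_var = (1 - momentum) * ema_var + momentum * batch_var
    return x_hat, new_mean, new_var

def dot_weights(probs, class_momentum=0.95, state=None):
    """Dynamic online re-weighting (illustrative sketch).

    Maintains an EMA of soft pseudo-label class frequencies and
    down-weights samples from currently dominant classes, so the
    parameter update is not biased toward those classes.
    """
    n_classes = probs.shape[1]
    if state is None:
        state = np.full(n_classes, 1.0 / n_classes)
    pseudo = probs.argmax(axis=1)
    for p in probs:  # online EMA update of the class-frequency estimate
        state = class_momentum * state + (1 - class_momentum) * p
    w = 1.0 / (state[pseudo] + 1e-8)   # inverse-frequency weights
    w = w / w.sum() * len(w)           # normalize to mean 1
    return w, state
```

In a real network these would replace the BN forward pass and scale the per-sample self-learning loss, respectively.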
Related papers
- Uncertainty-Calibrated Test-Time Model Adaptation without Forgetting
Test-time adaptation (TTA) seeks to tackle potential distribution shifts between training and test data by adapting a given model w.r.t. any test sample.
Prior methods perform backpropagation for each test sample, resulting in unbearable optimization costs for many applications.
We propose an Efficient Anti-Forgetting Test-Time Adaptation (EATA) method which develops an active sample selection criterion to identify reliable and non-redundant samples.
arXiv Detail & Related papers (2024-03-18T05:49:45Z)
- Generalized Robust Test-Time Adaptation in Continuous Dynamic Scenarios
Test-time adaptation (TTA) adapts pre-trained models to test distributions during the inference phase, exclusively employing unlabeled test data streams.
We propose a Generalized Robust Test-Time Adaptation (GRoTTA) method to effectively address this challenging problem.
arXiv Detail & Related papers (2023-10-07T07:13:49Z)
- On Pitfalls of Test-Time Adaptation
Test-Time Adaptation (TTA) has emerged as a promising approach for tackling the robustness challenge under distribution shifts.
We present TTAB, a test-time adaptation benchmark that encompasses ten state-of-the-art algorithms, a diverse array of distribution shifts, and two evaluation protocols.
arXiv Detail & Related papers (2023-06-06T09:35:29Z)
- Universal Test-time Adaptation through Weight Ensembling, Diversity Weighting, and Prior Correction
Test-time adaptation (TTA) continues to update the model after deployment, leveraging the current test data.
We identify and highlight several challenges a self-training based method has to deal with.
To prevent the model from becoming biased, we leverage a dataset- and model-agnostic certainty and diversity weighting.
arXiv Detail & Related papers (2023-06-01T13:16:10Z)
- A Comprehensive Survey on Test-Time Adaptation under Distribution Shifts
Test-time adaptation, an emerging paradigm, has the potential to adapt a pre-trained model to unlabeled data during testing, before making predictions.
Recent progress in this paradigm highlights the significant benefits of utilizing unlabeled data for training self-adapted models prior to inference.
arXiv Detail & Related papers (2023-03-27T16:32:21Z)
- CAFA: Class-Aware Feature Alignment for Test-Time Adaptation
Test-time adaptation (TTA) aims to address distribution shifts by adapting a model to unlabeled data at test time.
We propose a simple yet effective feature alignment loss, termed Class-Aware Feature Alignment (CAFA), which simultaneously encourages a model to learn target representations in a class-discriminative manner.
arXiv Detail & Related papers (2022-06-01T03:02:07Z)
- Efficient Test-Time Model Adaptation without Forgetting
Test-time adaptation seeks to tackle potential distribution shifts between training and testing data.
We propose an active sample selection criterion to identify reliable and non-redundant samples.
We also introduce a Fisher regularizer to constrain important model parameters from drastic changes.
arXiv Detail & Related papers (2022-04-06T06:39:40Z)
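Several of the related papers above (e.g. EATA) select reliable test samples before updating the model. A common realization of such a criterion, sketched here with an assumed entropy threshold rather than any specific paper's formulation, filters samples by prediction entropy:

```python
import numpy as np

def select_reliable(probs, entropy_frac=0.4):
    """Entropy-based reliable-sample selection (illustrative sketch).

    Keeps test samples whose prediction entropy is below a fraction
    of the maximum possible entropy log(C); low entropy indicates a
    confident prediction, presumed reliable for self-learning updates.
    """
    n_classes = probs.shape[1]
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    threshold = entropy_frac * np.log(n_classes)
    return entropy < threshold  # boolean mask over the batch
```

Only the samples passing the mask would then contribute to the adaptation loss, which both stabilizes the update and reduces the number of backward passes.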
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.