Diversity-aware Buffer for Coping with Temporally Correlated Data
Streams in Online Test-time Adaptation
- URL: http://arxiv.org/abs/2401.00989v1
- Date: Tue, 2 Jan 2024 01:56:25 GMT
- Title: Diversity-aware Buffer for Coping with Temporally Correlated Data
Streams in Online Test-time Adaptation
- Authors: Mario Döbler, Florian Marencke, Robert A. Marsden, Bin Yang
- Abstract summary: Test data streams are not always independent and identically distributed (i.i.d.).
We propose a diversity-aware and category-balanced buffer that can simulate an i.i.d. data stream, even in non-i.i.d. scenarios.
We achieve state-of-the-art results on most considered benchmarks.
- Score: 3.1265626879839923
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Since distribution shifts are likely to occur after a model's deployment and
can drastically decrease the model's performance, online test-time adaptation
(TTA) continues to update the model during test-time, leveraging the current
test data. In real-world scenarios, test data streams are not always
independent and identically distributed (i.i.d.). Instead, they are frequently
temporally correlated, making them non-i.i.d. Many existing methods struggle to
cope with this scenario. In response, we propose a diversity-aware and
category-balanced buffer that can simulate an i.i.d. data stream, even in
non-i.i.d. scenarios. Combined with a diversity and entropy-weighted entropy
loss, we show that a stable adaptation is possible on a wide range of
corruptions and natural domain shifts, based on ImageNet. We achieve
state-of-the-art results on most considered benchmarks.
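The abstract's central idea, a category-balanced buffer that turns a temporally correlated stream into approximately i.i.d. batches, can be sketched as follows. This is an illustrative toy (class names and capacities are hypothetical), not the authors' implementation: it keeps a bounded per-class slot keyed by pseudo-label and draws batches round-robin across classes.

```python
import random
from collections import defaultdict, deque

class CategoryBalancedBuffer:
    """Toy category-balanced buffer: stores up to `per_class` samples
    per (pseudo-)label and draws class-balanced batches, so batches
    look i.i.d. even when the input stream is temporally correlated.
    Illustrative sketch only, not the paper's code."""

    def __init__(self, per_class=16):
        self.per_class = per_class
        # Each class slot evicts its oldest sample when full.
        self.slots = defaultdict(lambda: deque(maxlen=per_class))

    def add(self, sample, pseudo_label):
        self.slots[pseudo_label].append(sample)

    def sample_batch(self, batch_size):
        # Round-robin over the observed classes to balance the batch.
        classes = [c for c in self.slots if self.slots[c]]
        batch = []
        while classes and len(batch) < batch_size:
            for c in classes:
                batch.append(random.choice(self.slots[c]))
                if len(batch) == batch_size:
                    break
        return batch
```

Even if the stream delivers long runs of a single class, `sample_batch` mixes whatever classes the buffer has seen, which is the property the paper exploits for stable adaptation.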
Related papers
- DOTA: Distributional Test-Time Adaptation of Vision-Language Models [52.98590762456236]
Training-free test-time dynamic adapter (TDA) is a promising approach to adapting vision-language models without additional training.
We propose a simple yet effective method for DistributiOnal Test-time Adaptation (Dota)
Dota continually estimates the distributions of test samples, allowing the model to continually adapt to the deployment environment.
arXiv Detail & Related papers (2024-09-28T15:03:28Z)
- Distribution Alignment for Fully Test-Time Adaptation with Dynamic Online Data Streams [19.921480334048756]
Test-Time Adaptation (TTA) enables adaptation and inference in test data streams with domain shifts from the source.
We propose a novel Distribution Alignment loss for TTA.
We surpass existing methods in non-i.i.d. scenarios and maintain competitive performance under the ideal i.i.d. assumption.
arXiv Detail & Related papers (2024-07-16T19:33:23Z)
- Adaptive Test-Time Personalization for Federated Learning [51.25437606915392]
We introduce a novel setting called test-time personalized federated learning (TTPFL)
In TTPFL, clients locally adapt a global model in an unsupervised way without relying on any labeled data during test-time.
We propose a novel algorithm called ATP to adaptively learn the adaptation rates for each module in the model from distribution shifts among source domains.
arXiv Detail & Related papers (2023-10-28T20:42:47Z)
- Generalized Robust Test-Time Adaptation in Continuous Dynamic Scenarios [18.527640606971563]
Test-time adaptation (TTA) adapts pre-trained models to test distributions during the inference phase exclusively employing unlabeled test data streams.
We propose a Generalized Robust Test-Time Adaptation (GRoTTA) method to effectively address this challenging setting.
arXiv Detail & Related papers (2023-10-07T07:13:49Z)
- AR-TTA: A Simple Method for Real-World Continual Test-Time Adaptation [1.4530711901349282]
We propose to validate test-time adaptation methods using datasets for autonomous driving, namely CLAD-C and SHIFT.
We observe that current test-time adaptation methods struggle to effectively handle varying degrees of domain shift.
We enhance the well-established self-training framework by incorporating a small memory buffer to increase model stability.
arXiv Detail & Related papers (2023-09-18T19:34:23Z)
- Robust Test-Time Adaptation in Dynamic Scenarios [9.475271284789969]
Test-time adaptation (TTA) intends to adapt the pretrained model to test distributions with only unlabeled test data streams.
We develop a Robust Test-Time Adaptation (RoTTA) method to handle the complex data streams arising in practical TTA (PTTA).
Our method is easy to implement, making it a good choice for rapid deployment.
arXiv Detail & Related papers (2023-03-24T10:19:14Z)
- DELTA: degradation-free fully test-time adaptation [59.74287982885375]
We find that two unfavorable defects are concealed in the prevalent adaptation methodologies like test-time batch normalization (BN) and self-learning.
First, we reveal that the normalization statistics in test-time BN are determined entirely by the currently received test samples, resulting in inaccurate estimates.
Second, we show that during test-time adaptation, the parameter update is biased towards some dominant classes.
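A common remedy for the first defect is to smooth the per-batch statistics with a moving average rather than trusting each test batch alone. The sketch below illustrates that general idea with NumPy; the momentum value and function names are hypothetical and this is not DELTA's exact procedure.

```python
import numpy as np

def ema_bn_stats(batches, momentum=0.1, eps=1e-5):
    """Normalize a sequence of test batches using exponential moving
    averages of mean/variance across batches, instead of each batch's
    own statistics alone. Illustrative sketch, not DELTA's method."""
    mean, var = None, None
    normalized = []
    for x in batches:  # x has shape (batch, features)
        b_mean, b_var = x.mean(axis=0), x.var(axis=0)
        if mean is None:
            mean, var = b_mean, b_var
        else:
            # Blend running statistics with the current batch.
            mean = (1 - momentum) * mean + momentum * b_mean
            var = (1 - momentum) * var + momentum * b_var
        normalized.append((x - mean) / np.sqrt(var + eps))
    return normalized, mean, var
```

With a small momentum, a single unrepresentative batch can only nudge the running estimates, which stabilizes normalization under correlated streams.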
arXiv Detail & Related papers (2023-01-30T15:54:00Z)
- Robust Continual Test-time Adaptation: Instance-aware BN and Prediction-balanced Memory [58.72445309519892]
We present a new test-time adaptation scheme that is robust against non-i.i.d. test data streams.
Our novelty is mainly two-fold: (a) Instance-Aware Batch Normalization (IABN) that corrects normalization for out-of-distribution samples, and (b) Prediction-balanced Reservoir Sampling (PBRS) that simulates i.i.d. data stream from non-i.i.d. stream in a class-balanced manner.
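The reservoir-sampling half of this idea can be sketched as follows: each predicted class maintains its own time-uniform reservoir, so the stored set stays class-balanced even when the incoming stream is not. This is a generic per-class reservoir sketch (names hypothetical), not the paper's PBRS implementation.

```python
import random

class ClassReservoir:
    """Toy per-class reservoir sampling: each class keeps a
    time-uniform reservoir of at most k samples. Illustrative only."""

    def __init__(self, k=8):
        self.k = k
        self.seen = {}        # number of samples observed per class
        self.reservoirs = {}  # class -> list of kept samples

    def add(self, sample, label):
        self.seen[label] = self.seen.get(label, 0) + 1
        res = self.reservoirs.setdefault(label, [])
        if len(res) < self.k:
            res.append(sample)
        else:
            # Classic reservoir step: after n observations, each has
            # probability k / n of being retained.
            j = random.randrange(self.seen[label])
            if j < self.k:
                res[j] = sample
```

Majority classes cannot crowd out minority classes, since each class can occupy at most k slots regardless of how often it appears in the stream.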
arXiv Detail & Related papers (2022-08-10T03:05:46Z)
- CAFA: Class-Aware Feature Alignment for Test-Time Adaptation [50.26963784271912]
Test-time adaptation (TTA) aims to address this challenge by adapting a model to unlabeled data at test time.
We propose a simple yet effective feature alignment loss, termed as Class-Aware Feature Alignment (CAFA), which simultaneously encourages a model to learn target representations in a class-discriminative manner.
arXiv Detail & Related papers (2022-06-01T03:02:07Z)
- Efficient Test-Time Model Adaptation without Forgetting [60.36499845014649]
Test-time adaptation seeks to tackle potential distribution shifts between training and testing data.
We propose an active sample selection criterion to identify reliable and non-redundant samples.
We also introduce a Fisher regularizer to constrain important model parameters from drastic changes.
arXiv Detail & Related papers (2022-04-06T06:39:40Z)
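A Fisher regularizer of the kind this entry describes is typically an EWC-style penalty: parameter changes are weighted by an estimate of each parameter's Fisher information, so important parameters drift less during adaptation. The sketch below is a hypothetical, minimal rendering of that idea, not the paper's exact formulation.

```python
import numpy as np

def fisher_penalty(params, anchor_params, fisher, lam=1.0):
    """EWC-style Fisher regularizer sketch: penalize the squared
    deviation of each parameter from its anchor (pre-adaptation)
    value, weighted by its estimated Fisher information.
    Names and the scaling factor `lam` are illustrative."""
    return lam * sum(
        float(np.sum(f * (p - a) ** 2))
        for p, a, f in zip(params, anchor_params, fisher)
    )
```

Adding this penalty to the adaptation loss discourages drastic changes to parameters the Fisher estimate marks as important, which is the forgetting-prevention mechanism the summary refers to.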
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.