Test-time Adaptation in the Dynamic World with Compound Domain Knowledge Management
- URL: http://arxiv.org/abs/2212.08356v3
- Date: Sat, 15 Apr 2023 04:03:04 GMT
- Title: Test-time Adaptation in the Dynamic World with Compound Domain Knowledge Management
- Authors: Junha Song, Kwanyong Park, InKyu Shin, Sanghyun Woo, Chaoning Zhang, and In So Kweon
- Abstract summary: Test-time adaptation (TTA) allows the model to adapt itself to novel environments and improve its performance during test time.
Several works on TTA have shown promising adaptation performance in continuously changing environments.
This paper first presents a robust TTA framework with compound domain knowledge management.
We then devise a novel regularization that modulates the adaptation rates using the domain similarity between the source and the current target domain.
- Score: 75.86903206636741
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Prior to the deployment of robotic systems, pre-training deep
recognition models on all potential visual cases is infeasible in practice. Hence,
test-time adaptation (TTA) allows the model to adapt itself to novel
environments and improve its performance during test time (i.e., lifelong
adaptation). Several works on TTA have shown promising adaptation performance
in continuously changing environments. However, our investigation reveals that
existing methods are vulnerable to dynamic distributional changes and often
lead to overfitting of TTA models. To address this problem, this paper first
presents a robust TTA framework with compound domain knowledge management. Our
framework helps the TTA model to harvest the knowledge of multiple
representative domains (i.e., compound domain) and conduct the TTA based on the
compound domain knowledge. In addition, to prevent overfitting of the TTA
model, we devise a novel regularization that modulates the adaptation rates
using the domain similarity between the source and the current target domain. With
the synergy of the proposed framework and regularization, we achieve consistent
performance improvements in diverse TTA scenarios, especially on dynamic domain
shifts. We demonstrate the generality of our proposals via extensive experiments
including image classification on ImageNet-C and semantic segmentation on GTA5,
C-driving, and corrupted Cityscapes datasets.
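To make the adaptation-rate modulation concrete, below is a minimal PyTorch-style sketch of entropy-minimization TTA whose learning rate is scaled by a source-target similarity score. The helper names (domain_similarity, adapt_on_batch), the cosine similarity on feature means, the backbone/head split, and the (1 - similarity) scaling are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch, assuming a generic entropy-minimization TTA loop and a
# cosine-similarity proxy for "domain similarity". Not the authors' code.
import torch
import torch.nn.functional as F

def domain_similarity(source_stats, target_feats):
    """Cosine similarity between stored source feature means and the mean
    feature of the current target batch (a stand-in for the paper's
    domain-similarity measure)."""
    target_mean = target_feats.mean(dim=0)
    return F.cosine_similarity(source_stats, target_mean, dim=0).clamp(min=0.0)

def adapt_on_batch(model, optimizer, x, source_stats, base_lr=1e-3):
    feats = model.backbone(x)        # assumes the model exposes a feature extractor
    logits = model.head(feats)
    # Unsupervised TTA objective: entropy of the current predictions.
    log_probs = logits.log_softmax(dim=1)
    entropy = -(log_probs.exp() * log_probs).sum(dim=1).mean()
    # One plausible modulation: adapt less when the target batch looks
    # source-like, more when it drifts away from the source domain.
    sim = domain_similarity(source_stats, feats.detach())
    for group in optimizer.param_groups:
        group["lr"] = base_lr * float(1.0 - sim)
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return logits.detach()
```

In this sketch the similarity score only rescales the step size; the paper's actual regularization and compound domain knowledge management are richer than what is shown here.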
Related papers
- Analytic Continual Test-Time Adaptation for Multi-Modality Corruption [23.545997349882857]
Test-Time Adaptation (TTA) aims to help pre-trained models bridge the gap between source and target datasets.
We propose a novel approach, Multi-modality Dynamic Analytic Adapter (MDAA) for MM-CTTA tasks.
MDAA achieves state-of-the-art performance on MM-CTTA while ensuring reliable model adaptation.
arXiv Detail & Related papers (2024-10-29T01:21:24Z) - Active Test-Time Adaptation: Theoretical Analyses and An Algorithm [51.84691955495693]
Test-time adaptation (TTA) addresses distribution shifts for streaming test data in unsupervised settings.
We propose the novel problem setting of active test-time adaptation (ATTA) that integrates active learning within the fully TTA setting.
arXiv Detail & Related papers (2024-04-07T22:31:34Z) - BECoTTA: Input-dependent Online Blending of Experts for Continual Test-time Adaptation [59.1863462632777]
Continual Test Time Adaptation (CTTA) is required to adapt efficiently to continuous unseen domains while retaining previously learned knowledge.
This paper proposes BECoTTA, an input-dependent and efficient modular framework for CTTA.
We validate that our method outperforms prior approaches across multiple CTTA scenarios, including disjoint and gradual domain shifts, while requiring 98% fewer trainable parameters.
arXiv Detail & Related papers (2024-02-13T18:37:53Z) - Persistent Test-time Adaptation in Recurring Testing Scenarios [12.024233973321756]
Current test-time adaptation (TTA) approaches aim to adapt a machine learning model to environments that change continuously.
Yet, it is unclear whether TTA methods can maintain their adaptability over prolonged periods.
We propose persistent TTA (PeTTA) which senses when the model is diverging towards collapse and adjusts the adaptation strategy.
arXiv Detail & Related papers (2023-11-30T02:24:44Z) - Adaptive Test-Time Personalization for Federated Learning [51.25437606915392]
We introduce a novel setting called test-time personalized federated learning (TTPFL).
In TTPFL, clients locally adapt a global model in an unsupervised way without relying on any labeled data during test-time.
We propose a novel algorithm called ATP to adaptively learn the adaptation rates for each module in the model from distribution shifts among source domains.
arXiv Detail & Related papers (2023-10-28T20:42:47Z) - AR-TTA: A Simple Method for Real-World Continual Test-Time Adaptation [1.4530711901349282]
We propose to validate test-time adaptation methods using datasets for autonomous driving, namely CLAD-C and SHIFT.
We observe that current test-time adaptation methods struggle to effectively handle varying degrees of domain shift.
We enhance the well-established self-training framework by incorporating a small memory buffer to increase model stability.
arXiv Detail & Related papers (2023-09-18T19:34:23Z) - Benchmarking Test-Time Adaptation against Distribution Shifts in Image Classification [77.0114672086012]
Test-time adaptation (TTA) is a technique aimed at enhancing the generalization performance of models by leveraging unlabeled samples solely during prediction.
We present a benchmark that systematically evaluates 13 prominent TTA methods and their variants on five widely used image classification datasets.
arXiv Detail & Related papers (2023-07-06T16:59:53Z) - ViDA: Homeostatic Visual Domain Adapter for Continual Test Time Adaptation [48.039156140237615]
A Continual Test-Time Adaptation task is proposed to adapt the pre-trained model to continually changing target domains.
We design a Visual Domain Adapter (ViDA) for CTTA, explicitly handling both domain-specific and domain-shared knowledge.
Our proposed method achieves state-of-the-art performance in both classification and segmentation CTTA tasks.
arXiv Detail & Related papers (2023-06-07T11:18:53Z) - Universal Test-time Adaptation through Weight Ensembling, Diversity Weighting, and Prior Correction [3.5139431332194198]
Test-time adaptation (TTA) continues to update the model after deployment, leveraging the current test data.
We identify and highlight several challenges a self-training based method has to deal with.
To prevent the model from becoming biased, we leverage a dataset and model-agnostic certainty and diversity weighting.
arXiv Detail & Related papers (2023-06-01T13:16:10Z)