DiCoTTA: Domain-invariant Learning for Continual Test-time Adaptation
- URL: http://arxiv.org/abs/2504.04981v1
- Date: Mon, 07 Apr 2025 12:09:18 GMT
- Title: DiCoTTA: Domain-invariant Learning for Continual Test-time Adaptation
- Authors: Sohyun Lee, Nayeong Kim, Juwon Kang, Seong Joon Oh, Suha Kwak
- Abstract summary: We present a novel online domain-invariant learning framework for continual test-time adaptation (CTTA). We propose a new model architecture and a test-time adaptation strategy dedicated to learning domain-invariant features without corrupting semantic contents. DiCoTTA achieved state-of-the-art performance on four public CTTA benchmarks.
- Score: 39.7909410173315
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper studies continual test-time adaptation (CTTA), the task of adapting a model to constantly changing unseen domains in testing while preserving previously learned knowledge. Existing CTTA methods mostly focus on adaptation to the current test domain only, overlooking generalization to arbitrary test domains a model may face in the future. To tackle this limitation, we present a novel online domain-invariant learning framework for CTTA, dubbed DiCoTTA. DiCoTTA aims to learn feature representation to be invariant to both current and previous test domains on the fly during testing. To this end, we propose a new model architecture and a test-time adaptation strategy dedicated to learning domain-invariant features without corrupting semantic contents, along with a new data structure and optimization algorithm for effectively managing information from previous test domains. DiCoTTA achieved state-of-the-art performance on four public CTTA benchmarks. Moreover, it showed superior generalization to unseen test domains.
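To make the abstract's mechanism concrete, here is a minimal, hypothetical sketch of online domain-invariant test-time adaptation: a running memory of feature statistics from past test domains plus an adaptation step that combines entropy minimization with an invariance penalty. All names (TinyModel, DomainMemory, adapt_batch, lambda_inv) and the specific losses are assumptions for illustration, not DiCoTTA's actual architecture or objective.

```python
# Toy sketch (not DiCoTTA's actual method): entropy minimization plus an
# invariance penalty toward a running memory of feature statistics from
# previously seen test domains.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyModel(nn.Module):
    def __init__(self, in_dim=64, feat_dim=32, classes=10):
        super().__init__()
        self.backbone = nn.Linear(in_dim, feat_dim)
        self.head = nn.Linear(feat_dim, classes)

class DomainMemory:
    """Exponential moving average of feature mean/variance across test domains."""
    def __init__(self, dim, momentum=0.99):
        self.mean, self.var, self.m = torch.zeros(dim), torch.ones(dim), momentum

    def update(self, feats):
        self.mean = self.m * self.mean + (1 - self.m) * feats.mean(0).detach()
        self.var = self.m * self.var + (1 - self.m) * feats.var(0).detach()

def adapt_batch(model, x, memory, optimizer, lambda_inv=0.1):
    feats = model.backbone(x)
    logits = model.head(feats)
    probs = logits.softmax(dim=1)
    # Entropy minimization: a common unsupervised TTA objective.
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(1).mean()
    # Invariance: keep current-batch statistics close to pooled past-domain stats.
    invariance = F.mse_loss(feats.mean(0), memory.mean) + \
                 F.mse_loss(feats.var(0), memory.var)
    loss = entropy + lambda_inv * invariance
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    memory.update(feats)
    return logits.detach()

model = TinyModel()
memory = DomainMemory(dim=32)
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
preds = adapt_batch(model, torch.randn(16, 64), memory, opt)
```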
Related papers
- Context-Aware Self-Adaptation for Domain Generalization [32.094290282897894]
Domain generalization aims at developing learning algorithms, trained on source domains, that generalize to unseen target domains.
We present a novel two-stage approach called Context-Aware Self-Adaptation (CASA) for domain generalization.
arXiv Detail & Related papers (2025-04-03T22:33:38Z)
- DPCore: Dynamic Prompt Coreset for Continual Test-Time Adaptation
Continual Test-Time Adaptation (CTTA) seeks to adapt source pre-trained models to continually changing, unseen target domains.
DPCore is a method designed for robust performance across diverse domain change patterns.
arXiv Detail & Related papers (2024-06-15T20:47:38Z)
- Boosting Large Language Models with Continual Learning for Aspect-based Sentiment Analysis
Aspect-based sentiment analysis (ABSA) is an important subtask of sentiment analysis.
We propose a Large Language Model-based Continual Learning (LLM-CL) model for ABSA.
arXiv Detail & Related papers (2024-05-09T02:00:07Z)
- What, How, and When Should Object Detectors Update in Continually Changing Test Domains? [34.13756022890991]
Test-time adaptation algorithms have been proposed to adapt a model online while running inference on test data.
We propose a novel online adaptation approach for object detection in continually changing test domains.
Our approach surpasses baselines on widely used benchmarks, achieving improvements of up to 4.9%p and 7.9%p in mAP.
arXiv Detail & Related papers (2023-12-12T07:13:08Z)
- A Comprehensive Survey on Test-Time Adaptation under Distribution Shifts [117.72709110877939]
Test-time adaptation (TTA) has the potential to adapt a pre-trained model to unlabeled data during testing, before making predictions.
We categorize TTA into several distinct groups based on the form of test data, namely, test-time domain adaptation, test-time batch adaptation, and online test-time adaptation.
arXiv Detail & Related papers (2023-03-27T16:32:21Z)
- Test-time Adaptation in the Dynamic World with Compound Domain Knowledge Management [75.86903206636741]
Test-time adaptation (TTA) allows the model to adapt itself to novel environments and improve its performance during test time.
Several works on TTA have shown promising adaptation performance in continuously changing environments.
This paper first presents a robust TTA framework with compound domain knowledge management.
We then devise a novel regularization that modulates the adaptation rate using the domain similarity between the source and the current target domain (see the sketch below).
arXiv Detail & Related papers (2022-12-16T09:02:01Z)
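One way to picture the rate-modulation idea above: scale the learning rate of each adaptation step by the similarity between source and current target feature statistics. The cosine-similarity measure and the linear scaling below are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative sketch (assumed, not the paper's exact rule): modulate the
# adaptation step size by source/target statistic similarity.
import torch
import torch.nn.functional as F

def modulated_lr(base_lr, src_mean, tgt_mean, min_scale=0.1):
    """One plausible choice: lower similarity (larger gap) -> larger rate."""
    sim = F.cosine_similarity(src_mean, tgt_mean, dim=0).clamp(0.0, 1.0)
    scale = min_scale + (1.0 - min_scale) * (1.0 - sim)
    return base_lr * scale.item()

src = torch.randn(128)               # e.g. source BatchNorm running means
tgt = src + 0.5 * torch.randn(128)   # drifted target statistics
lr = modulated_lr(1e-3, src, tgt)    # larger domain gap => faster adaptation
```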
- TTTFlow: Unsupervised Test-Time Training with Normalizing Flow [18.121961548745112]
A major problem of deep neural networks for image classification is their vulnerability to domain changes at test-time.
Recent methods address this problem with test-time training (TTT), where a two-branch model learns a main classification task together with a self-supervised task that is later used for test-time adaptation.
We propose TTTFlow: a Y-shaped architecture with an unsupervised head based on Normalizing Flows that learns the distribution of latent features and detects domain shifts in test examples (see the sketch below).
arXiv Detail & Related papers (2022-10-20T16:32:06Z)
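A rough sketch of such a flow-based shift detector: push latent features through a flow, score them under a standard-normal base distribution, and treat high negative log-likelihood as evidence of domain shift. The single coupling layer below is an assumption; TTTFlow's actual flow head is deeper.

```python
# Toy flow head for shift detection: one RealNVP-style affine coupling
# layer (an assumed simplification of TTTFlow's architecture).
import math
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(nn.Linear(self.half, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * (dim - self.half)))

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(x1).chunk(2, dim=1)
        s = torch.tanh(s)                       # keep log-scales stable
        z = torch.cat([x1, x2 * s.exp() + t], dim=1)
        return z, s.sum(dim=1)                  # z and log|det Jacobian|

def feature_nll(flow, feats):
    """Negative log-likelihood under a standard-normal base; high = shifted."""
    z, log_det = flow(feats)
    log_pz = -0.5 * (z ** 2).sum(1) - 0.5 * z.size(1) * math.log(2 * math.pi)
    return -(log_pz + log_det)

flow = AffineCoupling(dim=32)
scores = feature_nll(flow, torch.randn(8, 32))  # per-example shift scores
```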
- TeST: Test-time Self-Training under Distribution Shift [99.68465267994783]
Test-Time Self-Training (TeST) is a technique that takes as input a model trained on some source data and a novel data distribution at test time (the self-training mechanism is sketched below).
We find that models adapted using TeST significantly improve over baseline test-time adaptation algorithms.
arXiv Detail & Related papers (2022-09-23T07:47:33Z)
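The core loop of test-time self-training, in hedged form: a teacher produces pseudo-labels on unlabeled test data and a student is updated only on the confident ones. The threshold and the teacher/student split are assumptions for illustration, not TeST's full two-stage procedure.

```python
# Minimal confidence-thresholded self-training step (an assumed sketch of
# the general mechanism, not TeST's exact algorithm).
import torch
import torch.nn as nn
import torch.nn.functional as F

def self_train_step(student, teacher, x, optimizer, threshold=0.9):
    with torch.no_grad():
        probs = teacher(x).softmax(dim=1)
        conf, pseudo = probs.max(dim=1)
        keep = conf >= threshold              # keep only confident pseudo-labels
    if keep.any():
        loss = F.cross_entropy(student(x[keep]), pseudo[keep])
        optimizer.zero_grad(); loss.backward(); optimizer.step()

student = nn.Linear(64, 10)
teacher = nn.Linear(64, 10)                   # e.g. an EMA copy of the student
opt = torch.optim.SGD(student.parameters(), lr=1e-3)
self_train_step(student, teacher, torch.randn(32, 64), opt)
```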
- Gradual Test-Time Adaptation by Self-Training and Style Transfer [5.110894308882439]
We show the natural connection between gradual domain adaptation and test-time adaptation.
We propose a new method based on self-training and style transfer.
We show the effectiveness of our method on the continual and gradual CIFAR10C, CIFAR100C, and ImageNet-C benchmarks.
arXiv Detail & Related papers (2022-08-16T13:12:19Z)
- Learning Instance-Specific Adaptation for Cross-Domain Segmentation [79.61787982393238]
We propose a test-time adaptation method for cross-domain image segmentation.
Given a new unseen instance at test time, we adapt a pre-trained model by conducting instance-specific BatchNorm calibration (see the sketch below).
arXiv Detail & Related papers (2022-03-30T17:59:45Z)
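One plausible reading of instance-specific BatchNorm calibration, sketched below: blend the source-domain running statistics with statistics from the single test instance by hijacking BN momentum for one forward pass. The mixing weight alpha and the momentum trick are assumptions, not the paper's exact procedure.

```python
# Assumed sketch of per-instance BatchNorm calibration: one train-mode
# forward pass with momentum set to alpha blends the statistics as
# running <- (1 - alpha) * source + alpha * instance.
import torch
import torch.nn as nn

@torch.no_grad()
def calibrate_bn(model, x, alpha=0.3):
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.momentum = alpha        # a real implementation would restore this
    model.train()                     # BN uses batch stats and updates buffers
    model(x)                          # x: a single test instance (1, C, H, W)
    model.eval()                      # inference now uses calibrated stats

net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8), nn.ReLU())
calibrate_bn(net, torch.randn(1, 3, 32, 32))
```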
- Learning to Generalize across Domains on Single Test Samples [126.9447368941314]
We learn to generalize across domains on single test samples.
We formulate the adaptation to the single test sample as a variational Bayesian inference problem (see the bound sketched below).
Our model achieves performance at least comparable to, and often better than, state-of-the-art methods on multiple benchmarks for domain generalization.
arXiv Detail & Related papers (2022-02-16T13:21:04Z)
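As a rough illustration of that formulation (notation assumed here, not taken from the paper), treating the adapted parameters theta as latent variables conditioned on the single test sample x yields a standard evidence lower bound:

```latex
\log p(y \mid x)
  \;\ge\;
  \mathbb{E}_{q_\phi(\theta \mid x)}\!\left[ \log p(y \mid x, \theta) \right]
  \;-\;
  \mathrm{KL}\!\left( q_\phi(\theta \mid x) \,\middle\|\, p(\theta) \right)
```

Maximizing the right-hand side fits the sample-conditioned posterior q_phi while keeping it close to the prior, which is the usual variational treatment of per-sample adaptation.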
- Adaptive Risk Minimization: Learning to Adapt to Domain Shift [109.87561509436016]
A fundamental assumption of most machine learning algorithms is that the training and test data are drawn from the same underlying distribution.
In this work, we consider the problem setting of domain generalization, where the training data are structured into domains and there may be multiple test time shifts.
We introduce the framework of adaptive risk minimization (ARM), in which models are directly optimized for effective adaptation to shift by learning to adapt on the training domains (see the sketch below).
arXiv Detail & Related papers (2020-07-06T17:59:30Z)
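A minimal sketch in the spirit of ARM's context-network instantiation: a context module summarizes the domain-homogeneous batch and feeds that summary to the classifier, so adaptation itself is learned during training. Layer sizes and the mean-pooling summary are assumptions.

```python
# Assumed sketch of an ARM-style context model: the batch-level context
# vector is the learned adaptation signal, trained on domain-structured batches.
import torch
import torch.nn as nn

class ContextARM(nn.Module):
    def __init__(self, in_dim=16, ctx_dim=8, classes=5):
        super().__init__()
        self.context = nn.Linear(in_dim, ctx_dim)         # summarizes the batch
        self.classifier = nn.Linear(in_dim + ctx_dim, classes)

    def forward(self, x):
        ctx = self.context(x).mean(dim=0, keepdim=True)   # one vector per batch
        ctx = ctx.expand(x.size(0), -1)                   # broadcast to examples
        return self.classifier(torch.cat([x, ctx], dim=1))

# Train on batches drawn from one domain at a time, so the context encodes
# domain-specific cues that the classifier learns to exploit.
model = ContextARM()
logits = model(torch.randn(32, 16))  # one domain's batch -> adapted predictions
```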