Class-aware Domain Knowledge Fusion and Fission for Continual Test-Time Adaptation
- URL: http://arxiv.org/abs/2510.12150v1
- Date: Tue, 14 Oct 2025 05:09:50 GMT
- Title: Class-aware Domain Knowledge Fusion and Fission for Continual Test-Time Adaptation
- Authors: Jiahuan Zhou, Chao Zhu, Zhenyu Cui, Zichen Liu, Xu Zou, Gang Hua
- Abstract summary: We propose a class-aware domain Knowledge Fusion and Fission method for continual test-time adaptation. A domain Knowledge FIssion (KFI) module is designed to adaptively separate new domain knowledge from a paired class-aware domain prompt pool. A domain Knowledge FUsion (KFU) module is further designed to merge the fissioned new knowledge into the existing knowledge pool with minimal cost.
- Score: 50.831196928686104
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Continual Test-Time Adaptation (CTTA) aims to quickly fine-tune a model during the test phase so that it can adapt to multiple unknown downstream domain distributions without pre-acquiring downstream domain data. Existing advanced CTTA methods mainly reduce the catastrophic forgetting of historical knowledge caused by irregular switching of downstream domain data by restoring the initial model or reusing historical models. However, these methods usually suffer from seriously insufficient learning of new knowledge and interference from potentially harmful historical knowledge, resulting in severe performance degradation. To address this, we propose a class-aware domain Knowledge Fusion and Fission method for continual test-time adaptation, called KFF, which adaptively expands and merges class-aware domain knowledge across old and new domains according to test-time data from different domains, so that discriminative historical knowledge can be dynamically accumulated. Specifically, considering the large domain gaps within streaming data, a domain Knowledge FIssion (KFI) module is designed to adaptively separate new domain knowledge from a paired class-aware domain prompt pool, alleviating the impact of negative knowledge brought by old domains that are distinct from the current domain. Besides, to avoid cumulative computation and storage overheads from continuously fissioning new knowledge, a domain Knowledge FUsion (KFU) module is further designed to merge the fissioned new knowledge into the existing knowledge pool at minimal cost, where a greedy dynamic knowledge merging strategy improves the compatibility of new and old knowledge while maintaining computational efficiency. Extensive experiments on the ImageNet-C dataset verify the effectiveness of our proposed method against other methods.
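The abstract describes the KFI/KFU cycle only at a high level, so the following minimal Python sketch illustrates one plausible reading of it: a per-class prompt pool that fissions off a new domain prompt whenever an incoming feature is far from all stored prompts, and greedily fuses the most similar pair whenever the pool exceeds a budget. The class name, thresholds, and cosine-similarity criterion are illustrative assumptions, not the paper's actual implementation.

```python
# A minimal sketch of the fission/fusion cycle described in the abstract.
# Names, thresholds, and the similarity criterion are assumptions.
import torch
import torch.nn.functional as F


class ClassAwarePromptPool:
    """Per-class pool of domain prompt vectors with fission and greedy fusion."""

    def __init__(self, num_classes: int, gap_threshold: float = 0.5,
                 max_per_class: int = 4):
        self.prompts = {c: [] for c in range(num_classes)}  # lists of (dim,) tensors
        self.gap_threshold = gap_threshold  # assumed trigger for fissioning new knowledge
        self.max_per_class = max_per_class  # assumed storage budget per class

    def fission(self, feat: torch.Tensor, cls: int) -> torch.Tensor:
        """KFI: reuse the closest stored prompt if the current domain is close
        enough to known knowledge; otherwise split off a new domain prompt."""
        pool = self.prompts[cls]
        if pool:
            sims = torch.stack([F.cosine_similarity(feat, p, dim=0) for p in pool])
            best = int(sims.argmax())
            if sims[best] >= self.gap_threshold:
                return pool[best]
        new_prompt = feat.detach().clone()  # distinct domain: fission new knowledge
        pool.append(new_prompt)
        return new_prompt

    def fusion(self, cls: int) -> None:
        """KFU: while over budget, greedily average the most similar pair of
        prompts, keeping storage and computation bounded."""
        pool = self.prompts[cls]
        while len(pool) > self.max_per_class:
            best_i, best_j, best_sim = 0, 1, -1.0
            for i in range(len(pool)):
                for j in range(i + 1, len(pool)):
                    s = F.cosine_similarity(pool[i], pool[j], dim=0).item()
                    if s > best_sim:
                        best_i, best_j, best_sim = i, j, s
            pool[best_j] = 0.5 * (pool[best_i] + pool[best_j])
            pool.pop(best_i)
```

At test time, one would run fission with a pseudo-labeled feature to select or extend the prompt for the current domain, then run fusion on that class to keep the pool within budget.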
Related papers
- Out-of-Context Misinformation Detection via Variational Domain-Invariant Learning with Test-Time Training [7.447483980331488]
Out-of-context misinformation (OOC) is a low-cost form of misinformation in news reports. We propose VDT to enhance the domain adaptation capability for OOC misinformation detection.
arXiv Detail & Related papers (2025-11-13T11:34:26Z)
- EReLiFM: Evidential Reliability-Aware Residual Flow Meta-Learning for Open-Set Domain Generalization under Noisy Labels [85.78886153628663]
Open-Set Domain Generalization aims to enable deep learning models to recognize unseen categories in new domains. Label noise hinders open-set domain generalization by corrupting source-domain knowledge. We propose Evidential Reliability-Aware Residual Flow Meta-Learning (EReLiFM) to bridge domain gaps.
arXiv Detail & Related papers (2025-10-14T16:23:11Z)
- Learn Faster and Remember More: Balancing Exploration and Exploitation for Continual Test-time Adaptation [42.08969745752455]
Continual Test-Time Adaptation (CTTA) aims to adapt a source pre-trained model to continually changing target domains during inference. This paper proposes a mean teacher framework that strikes an appropriate balance between Exploration and Exploitation (a generic sketch of the mean-teacher update appears after this list).
arXiv Detail & Related papers (2025-08-18T06:08:56Z)
- Towards Federated Domain Unlearning: Verification Methodologies and Challenges [34.9987941096371]
We present the first comprehensive empirical study on Federated Domain Unlearning.
Our findings reveal that unlearning disproportionately affects the model's deeper layers.
We propose novel evaluation methodologies tailored for Federated Domain Unlearning.
arXiv Detail & Related papers (2024-06-05T09:05:55Z)
- Adapting to Distribution Shift by Visual Domain Prompt Generation [34.19066857066073]
We adapt a model at test time using a few unlabeled examples to address distribution shifts.
We build a knowledge bank to learn the transferable knowledge from source domains.
The proposed method outperforms previous work on 5 large-scale benchmarks including WILDS and DomainNet.
arXiv Detail & Related papers (2024-05-05T02:44:04Z)
- Activate and Reject: Towards Safe Domain Generalization under Category Shift [71.95548187205736]
We study a practical problem of Domain Generalization under Category Shift (DGCS).
It aims to simultaneously detect unknown-class samples and classify known-class samples in the target domains.
Compared to prior DG works, we face two new challenges: 1) how to learn the concept of "unknown" during training with only source known-class samples, and 2) how to adapt the source-trained model to unseen environments.
arXiv Detail & Related papers (2023-10-07T07:53:12Z)
- Decorate the Newcomers: Visual Domain Prompt for Continual Test Time Adaptation [14.473807945791132]
Continual Test-Time Adaptation (CTTA) aims to adapt the source model to continually changing unlabeled target domains without access to the source data.
Motivated by prompt learning in NLP, this paper proposes to learn an image-level visual domain prompt for target domains while keeping the source model parameters frozen.
arXiv Detail & Related papers (2022-12-08T08:56:02Z)
- LLEDA -- Lifelong Self-Supervised Domain Adaptation [9.71137838903781]
Humans and animals have the ability to continuously learn new information over their lifetime without losing previously acquired knowledge.
New information may conflict with old knowledge, resulting in catastrophic forgetting.
The proposed Lifelong Self-Supervised Domain Adaptation (LLEDA) framework draws inspiration from the CLS theory and mimics the interaction between two networks.
LLEDA's latent replay technique facilitates communication between these two networks by reactivating and replaying the past memory latent representations to stabilise long-term generalisation and retention without interfering with the previously learned information.
arXiv Detail & Related papers (2022-11-12T10:12:17Z)
- Unsupervised Domain Adaptation via Style-Aware Self-intermediate Domain [52.783709712318405]
Unsupervised domain adaptation (UDA), which transfers knowledge from a label-rich source domain to a related but unlabeled target domain, has attracted considerable attention. We propose a novel style-aware feature fusion method (SAFF) to bridge the large domain gap and transfer knowledge while alleviating the loss of class-discriminative information.
arXiv Detail & Related papers (2022-09-05T10:06:03Z)
- Prior Knowledge Guided Unsupervised Domain Adaptation [82.9977759320565]
We propose a Knowledge-guided Unsupervised Domain Adaptation (KUDA) setting where prior knowledge about the target class distribution is available.
In particular, we consider two specific types of prior knowledge about the class distribution in the target domain: Unary Bound and Binary Relationship.
We propose a rectification module that uses such prior knowledge to refine model generated pseudo labels.
arXiv Detail & Related papers (2022-07-18T18:41:36Z)
- On Generalizing Beyond Domains in Cross-Domain Continual Learning [91.56748415975683]
Deep neural networks often suffer from catastrophic forgetting of previously learned knowledge after learning a new task.
Our proposed approach learns new tasks under domain shift with accuracy boosts up to 10% on challenging datasets such as DomainNet and OfficeHome.
arXiv Detail & Related papers (2022-03-08T09:57:48Z)
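For the "Learn Faster and Remember More" entry above, the mean teacher framework it mentions builds on a standard exponential-moving-average (EMA) teacher update; the sketch below is a generic, minimal version of that update. The momentum value, function names, and the consistency-loss usage in the comments are assumptions, not that paper's actual recipe.

```python
# Generic mean-teacher EMA update commonly used in CTTA; a minimal sketch
# assuming a student/teacher pair with identical architectures.
import copy

import torch


@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module,
               momentum: float = 0.999) -> None:
    """Exponential moving average: teacher <- m * teacher + (1 - m) * student."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(momentum).add_(s_param, alpha=1.0 - momentum)


# Hypothetical usage: the teacher starts as a frozen copy of the source model
# and is updated after each adaptation step on the test stream.
# teacher = copy.deepcopy(student).eval()
# for batch in test_stream:
#     loss = consistency_loss(student(batch), teacher(batch))  # assumed loss
#     loss.backward(); optimizer.step(); optimizer.zero_grad()
#     ema_update(teacher, student)
```

The slow-moving teacher preserves historical knowledge (exploitation) while the student adapts quickly to the incoming domain (exploration), which is the balance that entry refers to.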
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.