DPCore: Dynamic Prompt Coreset for Continual Test-Time Adaptation
- URL: http://arxiv.org/abs/2406.10737v3
- Date: Tue, 11 Feb 2025 16:47:17 GMT
- Title: DPCore: Dynamic Prompt Coreset for Continual Test-Time Adaptation
- Authors: Yunbei Zhang, Akshay Mehra, Shuaicheng Niu, Jihun Hamm
- Abstract summary: Continual Test-Time Adaptation (CTTA) seeks to adapt source pre-trained models to continually changing, unseen target domains.
DPCore is a method designed for robust performance across diverse domain change patterns.
- Score: 11.151967974753925
- Abstract: Continual Test-Time Adaptation (CTTA) seeks to adapt source pre-trained models to continually changing, unseen target domains. While existing CTTA methods assume structured domain changes with uniform durations, real-world environments often exhibit dynamic patterns where domains recur with varying frequencies and durations. Current approaches, which adapt the same parameters across different domains, struggle in such dynamic conditions: they face convergence issues with brief domain exposures, risk forgetting previously learned knowledge, or misapplying it to irrelevant domains. To remedy this, we propose DPCore, a method designed for robust performance across diverse domain change patterns while ensuring computational efficiency. DPCore integrates three key components: Visual Prompt Adaptation for efficient domain alignment, a Prompt Coreset for knowledge preservation, and a Dynamic Update mechanism that intelligently adjusts existing prompts for similar domains while creating new ones for substantially different domains. Extensive experiments on four benchmarks demonstrate that DPCore consistently outperforms various CTTA methods, achieving state-of-the-art performance in both structured and dynamic settings while reducing trainable parameters by 99% and computation time by 64% compared to previous approaches.
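The abstract names the three components but not their implementation. The following PyTorch-style sketch only illustrates how a prompt coreset with a dynamic update rule could be organized: the class name, the statistics-based distance, the threshold `tau`, the entropy objective, and the `model(features, prompt)` interface are all assumptions made for illustration, not the paper's actual design.

```python
import torch


class PromptCoreset:
    """Illustrative sketch of a prompt coreset with a dynamic update rule."""

    def __init__(self, prompt_len=8, embed_dim=768, tau=5.0, lr=1e-3):
        self.prompt_len, self.embed_dim = prompt_len, embed_dim
        self.tau, self.lr = tau, lr          # tau: assumed similarity threshold
        self.prompts, self.stats = [], []    # coreset: prompts + domain statistics

    @staticmethod
    def _domain_stats(features):
        # Summarize a batch of token features (batch, tokens, dim) as per-channel mean/std.
        features = features.detach()
        return torch.cat([features.mean(dim=(0, 1)), features.std(dim=(0, 1))])

    def adapt(self, model, features):
        """Refine an existing prompt for a similar domain, or create a new one."""
        stats = self._domain_stats(features)
        if self.prompts:
            dists = torch.stack([(stats - s).norm() for s in self.stats])
            j = int(dists.argmin())
            if dists[j] < self.tau:
                # Similar domain: lightly tune the matched prompt and refresh its stats.
                self._tune(model, self.prompts[j], features)
                self.stats[j] = 0.9 * self.stats[j] + 0.1 * stats
                return self.prompts[j]
        # Substantially different domain: start a fresh prompt and add it to the coreset.
        prompt = torch.zeros(self.prompt_len, self.embed_dim, requires_grad=True)
        self._tune(model, prompt, features)
        self.prompts.append(prompt)
        self.stats.append(stats)
        return prompt

    def _tune(self, model, prompt, features):
        # Placeholder objective: entropy minimization of prompted predictions.
        # Assumes a hypothetical model that accepts a visual prompt as a second argument.
        opt = torch.optim.Adam([prompt], lr=self.lr)
        probs = model(features, prompt).softmax(dim=-1)
        loss = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
```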
Related papers
- Exploiting Aggregation and Segregation of Representations for Domain Adaptive Human Pose Estimation [50.31351006532924]
Human pose estimation (HPE) has received increasing attention recently due to its wide application in motion analysis, virtual reality, healthcare, etc.
However, it suffers from a lack of diverse labeled real-world datasets, since annotation is time- and labor-intensive.
We introduce a novel framework that capitalizes on both representation aggregation and segregation for domain adaptive human pose estimation.
arXiv Detail & Related papers (2024-12-29T17:59:45Z)
- Dynamic Prompt Allocation and Tuning for Continual Test-Time Adaptation [29.931721498877483]
Continual test-time adaptation (CTTA) has recently emerged to adapt to continuously evolving target distributions.
Existing methods typically incorporate explicit regularization terms to constrain the variation of model parameters.
We introduce learnable domain-specific prompts that guide the model to adapt to corresponding target domains.
arXiv Detail & Related papers (2024-12-12T14:24:04Z)
- Hybrid-TTA: Continual Test-time Adaptation via Dynamic Domain Shift Detection [14.382503104075917]
Continual Test Time Adaptation (CTTA) has emerged as a critical approach for bridging the domain gap between the controlled training environments and the real-world scenarios.
We propose Hybrid-TTA, a holistic approach that dynamically selects an instance-wise tuning method for optimal adaptation.
Our approach achieves a notable 1.6%p improvement in mIoU on the Cityscapes-to-ACDC benchmark dataset.
arXiv Detail & Related papers (2024-09-13T06:36:31Z)
- BECoTTA: Input-dependent Online Blending of Experts for Continual Test-time Adaptation [59.1863462632777]
Continual Test Time Adaptation (CTTA) is required to adapt efficiently to continuous unseen domains while retaining previously learned knowledge.
This paper proposes BECoTTA, an input-dependent and efficient modular framework for CTTA.
We validate that our method outperforms existing approaches in multiple CTTA scenarios, including disjoint and gradual domain shifts, while requiring 98% fewer trainable parameters.
arXiv Detail & Related papers (2024-02-13T18:37:53Z)
- Long-Term Invariant Local Features via Implicit Cross-Domain Correspondences [79.21515035128832]
We conduct a thorough analysis of the performance of current state-of-the-art feature extraction networks under various domain changes.
We propose a novel data-centric method, Implicit Cross-Domain Correspondences (iCDC).
iCDC represents the same environment with multiple Neural Radiance Fields, each fitting the scene under individual visual domains.
arXiv Detail & Related papers (2023-11-06T18:53:01Z)
- Contrastive Domain Adaptation for Time-Series via Temporal Mixup [14.723714504015483]
We propose a novel lightweight contrastive domain adaptation framework called CoTMix for time-series data.
Specifically, we propose a novel temporal mixup strategy to generate two intermediate augmented views for the source and target domains.
Our approach can significantly outperform all state-of-the-art UDA methods.
arXiv Detail & Related papers (2022-12-03T06:53:38Z)
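The summary above only names the temporal mixup idea. The sketch below shows one plausible way to form two cross-domain intermediate views of time-series batches; the function name, tensor shapes, and the dominance ratio `lam` are illustrative assumptions, not CoTMix's exact formulation.

```python
import torch


def temporal_mixup(source, target, lam=0.9):
    """Form two cross-domain intermediate views of time-series batches.

    source, target: tensors shaped (batch, channels, time). Each view is
    dominated by one domain (weight lam) and lightly mixed with the other;
    lam = 0.9 is an assumed value for illustration.
    """
    source_dominant = lam * source + (1.0 - lam) * target
    target_dominant = lam * target + (1.0 - lam) * source
    return source_dominant, target_dominant


# Usage: mix two batches of 3-channel, 128-step sequences.
src, tgt = torch.randn(32, 3, 128), torch.randn(32, 3, 128)
src_view, tgt_view = temporal_mixup(src, tgt)
```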
- Domain-incremental Cardiac Image Segmentation with Style-oriented Replay and Domain-sensitive Feature Whitening [67.6394526631557]
In clinical practice, a segmentation model should incrementally learn from each incoming dataset (e.g., the M&Ms cardiac data) and progressively improve its functionality over time.
In medical scenarios, this is particularly challenging as accessing or storing past data is commonly not allowed due to data privacy.
We propose a novel domain-incremental learning framework to recover past domain inputs first and then regularly replay them during model optimization.
arXiv Detail & Related papers (2022-11-09T13:07:36Z)
- Multi-Prompt Alignment for Multi-Source Unsupervised Domain Adaptation [86.02485817444216]
We introduce Multi-Prompt Alignment (MPA), a simple yet efficient framework for multi-source UDA.
MPA denoises the learned prompts through an auto-encoding process and aligns them by maximizing the agreement of all the reconstructed prompts.
Experiments show that MPA achieves state-of-the-art results on three popular datasets with an impressive average accuracy of 54.1% on DomainNet.
arXiv Detail & Related papers (2022-09-30T03:40:10Z)
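As a rough illustration of the denoise-and-align idea described above, the sketch below autoencodes per-domain prompts and encourages their reconstructions to agree. The linear autoencoder, the cosine-similarity agreement term, and all dimensions are assumptions, not MPA's actual architecture or loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical setup: one learned prompt per source domain, flattened to a vector.
num_domains, prompt_dim, latent_dim = 3, 512, 64
prompts = torch.randn(num_domains, prompt_dim, requires_grad=True)

# Shared autoencoder that "denoises" prompts by reconstructing them
# through a low-dimensional bottleneck.
encoder = nn.Linear(prompt_dim, latent_dim)
decoder = nn.Linear(latent_dim, prompt_dim)
opt = torch.optim.Adam([prompts, *encoder.parameters(), *decoder.parameters()], lr=1e-3)

for step in range(100):
    recon = decoder(encoder(prompts))            # reconstructed prompts
    recon_loss = F.mse_loss(recon, prompts)      # auto-encoding objective

    # Alignment: maximize agreement (here, pairwise cosine similarity)
    # among the reconstructed prompts of different source domains.
    normed = F.normalize(recon, dim=-1)
    sim = normed @ normed.t()
    off_diag = sim[~torch.eye(num_domains, dtype=torch.bool)]
    align_loss = -off_diag.mean()

    loss = recon_loss + align_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```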
- Efficient Hierarchical Domain Adaptation for Pretrained Language Models [77.02962815423658]
Generative language models are trained on diverse, general domain corpora.
We introduce a method to scale domain adaptation to many diverse domains using a computationally efficient adapter approach.
arXiv Detail & Related papers (2021-12-16T11:09:29Z)