DPCore: Dynamic Prompt Coreset for Continual Test-Time Adaptation
- URL: http://arxiv.org/abs/2406.10737v3
- Date: Tue, 11 Feb 2025 16:47:17 GMT
- Title: DPCore: Dynamic Prompt Coreset for Continual Test-Time Adaptation
- Authors: Yunbei Zhang, Akshay Mehra, Shuaicheng Niu, Jihun Hamm
- Abstract summary: Continual Test-Time Adaptation (CTTA) seeks to adapt source pre-trained models to continually changing, unseen target domains. DPCore is a method designed for robust performance across diverse domain change patterns.
- Score: 11.151967974753925
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Continual Test-Time Adaptation (CTTA) seeks to adapt source pre-trained models to continually changing, unseen target domains. While existing CTTA methods assume structured domain changes with uniform durations, real-world environments often exhibit dynamic patterns where domains recur with varying frequencies and durations. Current approaches, which adapt the same parameters across different domains, struggle in such dynamic conditions: they face convergence issues with brief domain exposures and risk forgetting previously learned knowledge or misapplying it to irrelevant domains. To remedy this, we propose DPCore, a method designed for robust performance across diverse domain change patterns while ensuring computational efficiency. DPCore integrates three key components: Visual Prompt Adaptation for efficient domain alignment, a Prompt Coreset for knowledge preservation, and a Dynamic Update mechanism that intelligently adjusts existing prompts for similar domains while creating new ones for substantially different domains. Extensive experiments on four benchmarks demonstrate that DPCore consistently outperforms various CTTA methods, achieving state-of-the-art performance in both structured and dynamic settings while reducing trainable parameters by 99% and computation time by 64% compared to previous approaches.
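The dynamic update mechanism invites a short illustration. Below is a minimal sketch, assuming domain statistics are summarized as feature means compared by Euclidean distance; the class name `PromptCoreset`, the EMA factor, and the threshold `tau` are hypothetical, not details taken from the paper.

```python
import torch

class PromptCoreset:
    """Hypothetical coreset of (domain statistic, visual prompt) pairs."""

    def __init__(self, prompt_dim, tau=5.0, momentum=0.9):
        self.tau = tau              # distance threshold for "same domain"
        self.momentum = momentum    # EMA factor for statistic updates
        self.prompt_dim = prompt_dim
        self.stats = []             # per-domain feature means
        self.prompts = []           # one learnable prompt per domain

    def route(self, feats):
        """Pick the prompt for a batch, creating a new one for novel domains."""
        mu = feats.mean(dim=0)      # summarize the batch by its feature mean
        if self.stats:
            dists = torch.stack([(mu - s).norm() for s in self.stats])
            i = int(dists.argmin())
            if dists[i] < self.tau:
                # Similar domain: nudge its statistic toward this batch and
                # reuse (and later fine-tune) the matching prompt.
                self.stats[i] = (self.momentum * self.stats[i]
                                 + (1 - self.momentum) * mu)
                return self.prompts[i]
        # Substantially different domain: register a fresh prompt.
        self.stats.append(mu)
        self.prompts.append(torch.zeros(self.prompt_dim, requires_grad=True))
        return self.prompts[-1]

# Usage: features from a frozen backbone decide which prompt the batch gets.
coreset = PromptCoreset(prompt_dim=768)
feats = torch.randn(32, 768)        # stand-in for backbone features
prompt = coreset.route(feats)       # e.g. prepended to ViT input tokens
```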
Related papers
- Visually Similar Pair Alignment for Robust Cross-Domain Object Detection [4.990739968576321]
Domain gaps between training data (source) and real-world environments (target) often degrade the performance of object detection models.
Most existing methods aim to bridge this gap by aligning features across source and target domains but often fail to account for visual differences, such as color or orientation, in alignment pairs.
In this work, we demonstrate for the first time, using a custom-built dataset, that aligning visually similar pairs significantly improves domain adaptation.
arXiv Detail & Related papers (2025-04-09T06:11:11Z)
- Exploiting Aggregation and Segregation of Representations for Domain Adaptive Human Pose Estimation [50.31351006532924]
Human pose estimation (HPE) has received increasing attention recently due to its wide application in motion analysis, virtual reality, healthcare, etc.
It suffers from a lack of diverse labeled real-world datasets, since annotation is time- and labor-intensive.
We introduce a novel framework that capitalizes on both representation aggregation and segregation for domain adaptive human pose estimation.
arXiv Detail & Related papers (2024-12-29T17:59:45Z)
- Dynamic Prompt Allocation and Tuning for Continual Test-Time Adaptation [29.931721498877483]
Continual test-time adaptation (CTTA) has recently emerged to adapt to continuously evolving target distributions.
Existing methods typically incorporate explicit regularization terms to constrain the variation of model parameters.
We introduce learnable domain-specific prompts that guide the model to adapt to corresponding target domains.
arXiv Detail & Related papers (2024-12-12T14:24:04Z)
- Hybrid-TTA: Continual Test-time Adaptation via Dynamic Domain Shift Detection [14.382503104075917]
Continual Test Time Adaptation (CTTA) has emerged as a critical approach for bridging the domain gap between controlled training environments and real-world scenarios.
We propose Hybrid-TTA, a holistic approach that dynamically selects an instance-wise tuning method for optimal adaptation.
Our approach achieves a notable 1.6%p improvement in mIoU on the Cityscapes-to-ACDC benchmark dataset.
arXiv Detail & Related papers (2024-09-13T06:36:31Z)
- Exploring Test-Time Adaptation for Object Detection in Continually Changing Environments [13.163784646113214]
Continual Test-Time Adaptation (CTTA) has recently emerged as a promising technique to gradually adapt a source-trained model to continually changing target domains.
We present AMROD, featuring three core components. First, an object-level contrastive learning module extracts object-level features for contrastive learning to refine the feature representation in the target domain.
Second, an adaptive monitoring module dynamically skips unnecessary adaptation and updates category-specific thresholds based on predicted confidence scores, improving efficiency and pseudo-label quality.
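As a rough sketch of such confidence-gated adaptation (the EMA threshold rule and all names here are assumptions of this sketch, not AMROD's exact procedure):

```python
import torch

class AdaptiveMonitor:
    """Hypothetical monitor that skips adaptation for confident categories."""

    def __init__(self, num_classes, momentum=0.9):
        self.thresh = torch.full((num_classes,), 0.5)  # per-category threshold
        self.momentum = momentum

    def should_adapt(self, cls_id, confidence):
        adapt = confidence < self.thresh[cls_id]  # low confidence => adapt
        # Track the running confidence level of this category.
        self.thresh[cls_id] = (self.momentum * self.thresh[cls_id]
                               + (1 - self.momentum) * confidence)
        return bool(adapt)

monitor = AdaptiveMonitor(num_classes=80)
if monitor.should_adapt(cls_id=3, confidence=0.42):
    pass  # run one test-time update step for this detection
```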
arXiv Detail & Related papers (2024-06-24T08:30:03Z)
- StyDeSty: Min-Max Stylization and Destylization for Single Domain Generalization [85.18995948334592]
Single domain generalization (single DG) aims at learning a robust model generalizable to unseen domains from only one training domain.
State-of-the-art approaches have mostly relied on data augmentations, such as adversarial perturbation and style enhancement, to synthesize new data.
We propose StyDeSty, which explicitly accounts for the alignment of the source and pseudo domains in the process of data augmentation.
arXiv Detail & Related papers (2024-06-01T02:41:34Z)
- BECoTTA: Input-dependent Online Blending of Experts for Continual Test-time Adaptation [59.1863462632777]
Continual Test Time Adaptation (CTTA) is required to adapt efficiently to continuous unseen domains while retaining previously learned knowledge.
This paper proposes BECoTTA, an input-dependent and efficient modular framework for CTTA.
We validate that our method outperforms prior approaches across multiple CTTA scenarios, including disjoint and gradual domain shifts, while requiring 98% fewer trainable parameters.
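A minimal sketch of input-dependent blending, assuming a softmax router over small linear adapter experts (the sizes, the residual form, and the router are illustrative, not BECoTTA's actual modules):

```python
import torch
import torch.nn as nn

dim, n_experts = 256, 4
experts = nn.ModuleList([nn.Linear(dim, dim) for _ in range(n_experts)])
router = nn.Linear(dim, n_experts)    # produces per-input expert weights

def blend(x):
    w = torch.softmax(router(x), dim=-1)                  # (batch, experts)
    outs = torch.stack([e(x) for e in experts], dim=-1)   # (batch, dim, experts)
    return x + torch.einsum('be,bde->bd', w, outs)        # residual blend

x = torch.randn(8, dim)
y = blend(x)    # each input gets its own mixture of experts
```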
arXiv Detail & Related papers (2024-02-13T18:37:53Z)
- What, How, and When Should Object Detectors Update in Continually Changing Test Domains? [34.13756022890991]
Test-time adaptation algorithms have been proposed to adapt the model online while inferring test data.
We propose a novel online adaptation approach for object detection in continually changing test domains.
Our approach surpasses baselines on widely used benchmarks, achieving improvements of up to 4.9%p and 7.9%p in mAP.
arXiv Detail & Related papers (2023-12-12T07:13:08Z)
- Long-Term Invariant Local Features via Implicit Cross-Domain Correspondences [79.21515035128832]
We conduct a thorough analysis of the performance of current state-of-the-art feature extraction networks under various domain changes.
We propose a novel data-centric method, Implicit Cross-Domain Correspondences (iCDC).
iCDC represents the same environment with multiple Neural Radiance Fields, each fitting the scene under individual visual domains.
arXiv Detail & Related papers (2023-11-06T18:53:01Z)
- ViDA: Homeostatic Visual Domain Adapter for Continual Test Time Adaptation [48.039156140237615]
A Continual Test-Time Adaptation task is proposed to adapt the pre-trained model to continually changing target domains.
We design a Visual Domain Adapter (ViDA) for CTTA, explicitly handling both domain-specific and domain-shared knowledge.
Our proposed method achieves state-of-the-art performance in both classification and segmentation CTTA tasks.
arXiv Detail & Related papers (2023-06-07T11:18:53Z)
- Test-time Adaptation in the Dynamic World with Compound Domain Knowledge Management [75.86903206636741]
Test-time adaptation (TTA) allows the model to adapt itself to novel environments and improve its performance during test time.
Several works for TTA have shown promising adaptation performances in continuously changing environments.
This paper first presents a robust TTA framework with compound domain knowledge management.
We then devise a novel regularization that modulates the adaptation rate using the domain similarity between the source and the current target domain.
arXiv Detail & Related papers (2022-12-16T09:02:01Z)
- Decorate the Newcomers: Visual Domain Prompt for Continual Test Time Adaptation [14.473807945791132]
Continual Test-Time Adaptation (CTTA) aims to adapt the source model to continually changing unlabeled target domains without access to the source data.
Motivated by prompt learning in NLP, in this paper we propose to learn an image-level visual domain prompt for target domains while keeping the source model parameters frozen.
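A minimal sketch of the idea, assuming an additive full-image prompt trained with an entropy objective (both assumptions of this sketch; the paper decorates inputs differently and has its own update rule):

```python
import torch

prompt = torch.zeros(3, 224, 224, requires_grad=True)  # image-level prompt
optimizer = torch.optim.SGD([prompt], lr=0.01)

frozen_model = torch.nn.Sequential(                    # stand-in source model
    torch.nn.Flatten(), torch.nn.Linear(3 * 224 * 224, 10))
for p in frozen_model.parameters():
    p.requires_grad_(False)                            # source stays frozen

x = torch.randn(8, 3, 224, 224)                        # unlabeled target batch
logits = frozen_model(x + prompt)                      # prompt broadcasts over batch
probs = logits.softmax(dim=-1)
entropy = -(probs * probs.clamp_min(1e-8).log()).sum(-1).mean()
entropy.backward()                                     # only the prompt gets gradients
optimizer.step()
```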
arXiv Detail & Related papers (2022-12-08T08:56:02Z)
- Contrastive Domain Adaptation for Time-Series via Temporal Mixup [14.723714504015483]
We propose a novel lightweight contrastive domain adaptation framework called CoTMix for time-series data.
Specifically, we propose a novel temporal mixup strategy to generate two intermediate augmented views for the source and target domains.
Our approach can significantly outperform all state-of-the-art UDA methods.
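One plausible reading of the temporal mixup, sketched below: each domain's series is mixed with a temporally smoothed view of the other domain, yielding a source-dominant and a target-dominant view (the moving-average context and the mixing ratio are assumptions, not CoTMix's exact operation):

```python
import torch
import torch.nn.functional as F

def temporal_context(x, k=3):
    """Moving average over 2k+1 timesteps; x is (batch, channels, time)."""
    return F.avg_pool1d(x, kernel_size=2 * k + 1, stride=1, padding=k,
                        count_include_pad=False)

def temporal_mixup(x_src, x_tgt, lam=0.8):
    """Create source-dominant and target-dominant intermediate views."""
    src_dom = lam * x_src + (1 - lam) * temporal_context(x_tgt)
    tgt_dom = lam * x_tgt + (1 - lam) * temporal_context(x_src)
    return src_dom, tgt_dom

x_s = torch.randn(16, 9, 128)   # e.g. 9-channel sensor series, 128 steps
x_t = torch.randn(16, 9, 128)
src_view, tgt_view = temporal_mixup(x_s, x_t)  # inputs to contrastive losses
```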
arXiv Detail & Related papers (2022-12-03T06:53:38Z)
- Domain-incremental Cardiac Image Segmentation with Style-oriented Replay and Domain-sensitive Feature Whitening [67.6394526631557]
In the M&Ms setting, a model should incrementally learn from each incoming dataset and progressively update with improved functionality over time.
In medical scenarios, this is particularly challenging as accessing or storing past data is commonly not allowed due to data privacy.
We propose a novel domain-incremental learning framework to recover past domain inputs first and then regularly replay them during model optimization.
arXiv Detail & Related papers (2022-11-09T13:07:36Z)
- Multi-Prompt Alignment for Multi-Source Unsupervised Domain Adaptation [86.02485817444216]
We introduce Multi-Prompt Alignment (MPA), a simple yet efficient framework for multi-source UDA.
MPA denoises the learned prompts through an auto-encoding process and aligns them by maximizing the agreement of all the reconstructed prompts.
Experiments show that MPA achieves state-of-the-art results on three popular datasets with an impressive average accuracy of 54.1% on DomainNet.
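A minimal sketch of the denoise-and-align step, assuming a shared linear autoencoder and a cosine agreement term (dimensions and losses here are illustrative, not MPA's exact objective):

```python
import torch
import torch.nn as nn

prompt_dim, hidden = 512, 128
autoencoder = nn.Sequential(nn.Linear(prompt_dim, hidden), nn.ReLU(),
                            nn.Linear(hidden, prompt_dim))

prompts = torch.randn(4, prompt_dim, requires_grad=True)  # one per source domain
recon = autoencoder(prompts)                              # denoised prompts

recon_loss = (recon - prompts).pow(2).mean()              # auto-encoding term
sim = nn.functional.cosine_similarity(recon.unsqueeze(0),
                                      recon.unsqueeze(1), dim=-1)
agreement = sim.mean()            # pairwise agreement of reconstructions
loss = recon_loss - agreement     # minimizing drives the prompts to align
loss.backward()
```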
arXiv Detail & Related papers (2022-09-30T03:40:10Z)
- Continual Test-Time Domain Adaptation [94.51284735268597]
Test-time domain adaptation aims to adapt a source pre-trained model to a target domain without using any source data.
CoTTA is easy to implement and can be readily incorporated in off-the-shelf pre-trained models.
arXiv Detail & Related papers (2022-03-25T11:42:02Z)
- Efficient Hierarchical Domain Adaptation for Pretrained Language Models [77.02962815423658]
Generative language models are trained on diverse, general domain corpora.
We introduce a method to scale domain adaptation to many diverse domains using a computationally efficient adapter approach.
arXiv Detail & Related papers (2021-12-16T11:09:29Z)
- Bilevel Online Adaptation for Out-of-Domain Human Mesh Reconstruction [94.25865526414717]
This paper considers a new problem of adapting a pre-trained model of human mesh reconstruction to out-of-domain streaming videos.
We propose Bilevel Online Adaptation (BOA), which divides the overall multi-objective optimization into two steps, weight probe and weight update, within each training iteration.
We demonstrate that BOA leads to state-of-the-art results on two human mesh reconstruction benchmarks.
arXiv Detail & Related papers (2021-03-30T15:47:58Z)
- Learning to Continuously Optimize Wireless Resource In Episodically Dynamic Environment [55.91291559442884]
This work develops a methodology that enables data-driven methods to continuously learn and optimize in a dynamic environment.
We propose to build the notion of continual learning into the modeling process of learning wireless systems.
Our design is based on a novel min-max formulation that ensures a certain "fairness" across different data samples.
arXiv Detail & Related papers (2020-11-16T08:24:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.