Continual Domain Adaptation through Pruning-aided Domain-specific Weight
Modulation
- URL: http://arxiv.org/abs/2304.07560v1
- Date: Sat, 15 Apr 2023 13:44:58 GMT
- Title: Continual Domain Adaptation through Pruning-aided Domain-specific Weight
Modulation
- Authors: Prasanna B, Sunandini Sanyal, R. Venkatesh Babu
- Abstract summary: We develop a method to address unsupervised domain adaptation (UDA) in a practical setting of continual learning (CL).
The goal is to update the model on continually changing domains while preserving domain-specific knowledge to prevent catastrophic forgetting of past-seen domains.
- Score: 37.3981662593942
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose to develop a method to address unsupervised domain
adaptation (UDA) in a practical setting of continual learning (CL). The goal is
to update the model on continually changing domains while preserving
domain-specific knowledge to prevent catastrophic forgetting of past-seen
domains. To this end, we build a framework for preserving domain-specific
features utilizing the inherent model capacity via pruning. We also perform
effective inference using a novel batch-norm-based metric to accurately predict
which set of model parameters to use. Our approach achieves not only
state-of-the-art performance but also prevents catastrophic forgetting of past
domains significantly. Our code is made publicly available.
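
To make the two ideas above more concrete, the sketch below is a minimal, hypothetical illustration (not the authors' released implementation): domain-specific binary weight masks obtained via pruning, and a batch-norm-statistics distance used at test time to decide which stored parameters to apply. The function names, distance measure, and data layout are all assumptions.

```python
# Minimal sketch (assumed, not the paper's code): per-domain weight masks from
# pruning, plus batch-norm-statistic matching to pick the domain at test time.
import torch


def apply_domain_mask(weight: torch.Tensor, masks: dict[str, torch.Tensor], domain: str) -> torch.Tensor:
    """Keep only the weights assigned to `domain`; each mask is a binary tensor
    of the same shape as `weight`, e.g. produced by magnitude pruning."""
    return weight * masks[domain]


def select_domain(feat: torch.Tensor,
                  bn_stats: dict[str, tuple[torch.Tensor, torch.Tensor]]) -> str:
    """Pick the stored domain whose batch-norm running statistics are closest
    to the statistics of the incoming test batch.

    feat: (N, C, H, W) activations from a shared backbone layer.
    bn_stats: domain name -> (running_mean, running_var), each of shape (C,).
    """
    batch_mean = feat.mean(dim=(0, 2, 3))
    batch_var = feat.var(dim=(0, 2, 3), unbiased=False)
    best_domain, best_dist = None, float("inf")
    for domain, (mean, var) in bn_stats.items():
        # Variance-normalized squared distance between statistics (an assumption;
        # the paper defines its own batch-norm based metric).
        dist = ((batch_mean - mean) ** 2 / (var + 1e-5)).sum() \
             + ((batch_var - var) ** 2 / (var + 1e-5)).sum()
        if dist.item() < best_dist:
            best_domain, best_dist = domain, dist.item()
    return best_domain
```

Keeping a frozen binary mask per domain means weights assigned to past domains are never overwritten, which is one common way pruning is used to counter catastrophic forgetting.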
Related papers
- Progressive Conservative Adaptation for Evolving Target Domains [76.9274842289221]
Conventional domain adaptation typically transfers knowledge from a source domain to a stationary target domain.
Restoring and adapting to such target data results in escalating computational and resource consumption over time.
We propose a simple yet effective approach, termed progressive conservative adaptation (PCAda).
arXiv Detail & Related papers (2024-02-07T04:11:25Z)
- ReMask: A Robust Information-Masking Approach for Domain Counterfactual Generation [16.275230631985824]
Domain counterfactual generation aims to transform a text from the source domain to a given target domain.
We employ a three-step domain obfuscation approach involving frequency- and attention-norm-based masking to remove domain-specific cues, followed by unmasking to regain the domain-generic context.
Our model outperforms the state-of-the-art by achieving a 1.4% average accuracy improvement in the adversarial domain adaptation setting.
arXiv Detail & Related papers (2023-05-04T14:19:02Z)
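
For the ReMask entry above, the snippet below gives a rough, hypothetical picture of one masking step (frequency-based masking of domain-revealing tokens); the corpora, smoothing, threshold, and [MASK] convention are assumptions rather than the paper's exact procedure.

```python
# Hypothetical frequency-based masking of domain-specific tokens (illustration only).
from collections import Counter


def mask_domain_cues(tokens: list[str], src_counts: Counter, generic_counts: Counter,
                     ratio_threshold: float = 3.0) -> list[str]:
    """Mask tokens that are disproportionately frequent in the source-domain
    corpus (src_counts) relative to a domain-generic corpus (generic_counts)."""
    masked = []
    for tok in tokens:
        src_freq = src_counts[tok] + 1        # add-one smoothing
        gen_freq = generic_counts[tok] + 1
        masked.append("[MASK]" if src_freq / gen_freq > ratio_threshold else tok)
    return masked
```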
- Domain-incremental Cardiac Image Segmentation with Style-oriented Replay and Domain-sensitive Feature Whitening [67.6394526631557]
A model should incrementally learn from each incoming dataset and progressively update with improved functionality as time goes by.
In medical scenarios, this is particularly challenging as accessing or storing past data is commonly not allowed due to data privacy.
We propose a novel domain-incremental learning framework to recover past domain inputs first and then regularly replay them during model optimization.
arXiv Detail & Related papers (2022-11-09T13:07:36Z)
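
One component named in the cardiac-segmentation entry above is domain-sensitive feature whitening. The snippet below is only a generic stand-in (channel-wise standardization of intermediate features); the paper's domain-sensitive variant is more involved and is not reproduced here.

```python
# Generic channel-wise feature standardization, a simplified stand-in for
# the paper's domain-sensitive feature whitening (illustration only).
import torch


def whiten_features(feat: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Remove the per-channel mean and scale by the per-channel standard
    deviation, computed over the batch and spatial dimensions.

    feat: (N, C, H, W) intermediate activations.
    """
    mean = feat.mean(dim=(0, 2, 3), keepdim=True)
    std = feat.std(dim=(0, 2, 3), keepdim=True)
    return (feat - mean) / (std + eps)
```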
- Feed-Forward Latent Domain Adaptation [17.71179872529747]
We study a new, highly practical problem setting that enables resource-constrained edge devices to adapt a pre-trained model to their local data distributions.
Considering the limitations of edge devices, we aim to use only a pre-trained model and adapt it in a feed-forward way, without back-propagation and without access to the source data.
Our solution is to meta-learn a network capable of embedding the mixed-relevance target dataset and dynamically adapting inference for target examples using cross-attention.
arXiv Detail & Related papers (2022-07-15T17:37:42Z)
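
The feed-forward adaptation entry above hinges on cross-attention between a target example and an embedding of the unlabeled target set. Below is a minimal, generic cross-attention step using PyTorch's built-in multi-head attention; the meta-learning and relevance handling from the paper are omitted, and all names and shapes are illustrative.

```python
# Illustrative cross-attention adapter: a query feature attends over a bank of
# target-domain support features (not the paper's actual architecture).
import torch
import torch.nn as nn


class CrossAttentionAdapter(nn.Module):
    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, query_feat: torch.Tensor, support_feats: torch.Tensor) -> torch.Tensor:
        # query_feat: (B, 1, dim); support_feats: (B, S, dim)
        adapted, _ = self.attn(query_feat, support_feats, support_feats)
        return query_feat + adapted   # residual combination (an assumption)
```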
- Towards Online Domain Adaptive Object Detection [79.89082006155135]
Existing object detection models assume both the training and test data are sampled from the same source domain.
We propose a novel unified adaptation framework that adapts and improves generalization on the target domain in online settings.
arXiv Detail & Related papers (2022-04-11T17:47:22Z)
- Unsupervised Domain Generalization for Person Re-identification: A Domain-specific Adaptive Framework [50.88463458896428]
Domain generalization (DG) has attracted much attention in person re-identification (ReID) recently.
Existing methods usually need the source domains to be labeled, which could be a significant burden for practical ReID tasks.
We propose a simple and efficient domain-specific adaptive framework, and realize it with an adaptive normalization module.
arXiv Detail & Related papers (2021-11-30T02:35:51Z)
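
The ReID entry above realizes its framework with an adaptive normalization module. One common way to build such a module is sketched below, blending statistics of the current (unlabeled target) batch with stored source statistics; this is an illustration with assumed names and mixing rule, not the paper's exact module.

```python
# Illustrative adaptive normalization: mix current-batch statistics with stored
# running statistics (BatchNorm2d is reused here only as a stats/affine container).
import torch
import torch.nn as nn


class AdaptiveNorm(nn.Module):
    def __init__(self, num_features: int, mix: float = 0.5, eps: float = 1e-5):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_features)   # running stats assumed to come from source training
        self.mix = mix
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        batch_mean = x.mean(dim=(0, 2, 3))
        batch_var = x.var(dim=(0, 2, 3), unbiased=False)
        mean = self.mix * batch_mean + (1 - self.mix) * self.bn.running_mean
        var = self.mix * batch_var + (1 - self.mix) * self.bn.running_var
        x_hat = (x - mean[None, :, None, None]) / torch.sqrt(var[None, :, None, None] + self.eps)
        return x_hat * self.bn.weight[None, :, None, None] + self.bn.bias[None, :, None, None]
```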
- Self-supervised Autoregressive Domain Adaptation for Time Series Data [9.75443057146649]
Unsupervised domain adaptation (UDA) has successfully addressed the domain shift problem for visual applications.
However, such approaches may have limited performance on time series data.
We propose a Self-supervised Autoregressive Domain Adaptation (SLARDA) framework to address these limitations.
arXiv Detail & Related papers (2021-11-29T08:17:23Z)
- Feature Stylization and Domain-aware Contrastive Learning for Domain Generalization [10.027279853737511]
Domain generalization aims to make a model robust to domain shift without accessing the target domain.
We propose a novel framework where feature statistics are utilized to stylize original features into ones with novel domain properties.
We achieve feature consistency with the proposed domain-aware supervised contrastive loss.
arXiv Detail & Related papers (2021-08-19T10:04:01Z)
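
The feature-stylization entry above re-styles features using feature statistics. A common AdaIN-style way to do this is sketched below as an illustration; the paper's stylization scheme and its domain-aware supervised contrastive loss are not reproduced.

```python
# AdaIN-style feature stylization: give `content` features the per-channel
# statistics of `style` features (illustration of the general idea only).
import torch


def stylize_features(content: torch.Tensor, style: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """content, style: (N, C, H, W) feature maps."""
    c_mean = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style.mean(dim=(2, 3), keepdim=True)
    s_std = style.std(dim=(2, 3), keepdim=True) + eps
    return ((content - c_mean) / c_std) * s_std + s_mean
```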
- Domain Adaptation for Semantic Parsing [68.81787666086554]
We propose a novel semantic parser for domain adaptation, where we have much less annotated data in the target domain compared to the source domain.
Our semantic parser benefits from a two-stage coarse-to-fine framework and can thus provide different and accurate treatments for the two stages.
Experiments on a benchmark dataset show that our method consistently outperforms several popular domain adaptation strategies.
arXiv Detail & Related papers (2020-06-23T14:47:41Z)