Boosting Domain Incremental Learning: Selecting the Optimal Parameters is All You Need
- URL: http://arxiv.org/abs/2505.23744v1
- Date: Thu, 29 May 2025 17:58:57 GMT
- Title: Boosting Domain Incremental Learning: Selecting the Optimal Parameters is All You Need
- Authors: Qiang Wang, Xiang Song, Yuhang He, Jizhou Han, Chenhao Ding, Xinyuan Gao, Yihong Gong
- Abstract summary: Domain Incremental Learning (DIL) offers a solution by enabling continual model adaptation. Existing PIDIL methods struggle with parameter selection accuracy as the number of domains and corresponding classes grows. We propose SOYO, a lightweight framework that improves domain selection in PIDIL.
- Score: 25.45880299215022
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Deep neural networks (DNNs) often underperform in real-world, dynamic settings where data distributions change over time. Domain Incremental Learning (DIL) offers a solution by enabling continual model adaptation, with Parameter-Isolation DIL (PIDIL) emerging as a promising paradigm to reduce knowledge conflicts. However, existing PIDIL methods struggle with parameter selection accuracy, especially as the number of domains and corresponding classes grows. To address this, we propose SOYO, a lightweight framework that improves domain selection in PIDIL. SOYO introduces a Gaussian Mixture Compressor (GMC) and Domain Feature Resampler (DFR) to store and balance prior domain data efficiently, while a Multi-level Domain Feature Fusion Network (MDFN) enhances domain feature extraction. Our framework supports multiple Parameter-Efficient Fine-Tuning (PEFT) methods and is validated across tasks such as image classification, object detection, and speech enhancement. Experimental results on six benchmarks demonstrate SOYO's consistent superiority over existing baselines, showcasing its robustness and adaptability in complex, evolving environments. The code will be released at https://github.com/qwangcv/SOYO.
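The abstract names the GMC and DFR components but gives no implementation details; as a rough illustration only, a Gaussian-mixture compressor that replaces stored domain features with a fitted mixture and later resamples balanced replay features might look like the following minimal sketch. The class name, component count, diagonal covariance, and resampling interface are all assumptions, not details from the paper.

```python
# Minimal sketch of a Gaussian-mixture compressor for domain features.
# All design choices here are assumptions; the paper's actual GMC/DFR
# live in the linked repository, not in the abstract.
import numpy as np
from sklearn.mixture import GaussianMixture

class GaussianMixtureCompressor:
    def __init__(self, n_components: int = 8):
        self.gmm = GaussianMixture(n_components=n_components,
                                   covariance_type="diag")

    def compress(self, features: np.ndarray) -> None:
        # Replace raw per-domain features with a fitted mixture model.
        self.gmm.fit(features)

    def resample(self, n_samples: int) -> np.ndarray:
        # Draw balanced synthetic features for replay (the DFR role).
        samples, _ = self.gmm.sample(n_samples)
        return samples

# Usage: fit on one domain's extracted features, replay them later.
feats = np.random.randn(1000, 128)       # stand-in for real features
gmc = GaussianMixtureCompressor(n_components=8)
gmc.compress(feats)
replayed = gmc.resample(256)             # features for training a domain selector
```

Storing a mixture instead of raw features is what makes the memory footprint independent of the number of samples per domain.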
Related papers
- Continual Adaptation: Environment-Conditional Parameter Generation for Object Detection in Dynamic Scenarios [54.58186816693791]
Environments constantly change over time and space, posing significant challenges for object detectors trained under a closed-set assumption. We propose a new mechanism that converts the fine-tuning process into specific parameter generation. In particular, we first design a dual-path LoRA-based domain-aware adapter that disentangles features into domain-invariant and domain-specific components.
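To make the dual-path idea concrete, here is a hedged sketch of a frozen linear layer with one shared (domain-invariant) low-rank branch and one per-domain (domain-specific) branch; the additive composition and all names are assumptions about the general technique, not the paper's exact design.

```python
# Sketch of a dual-path LoRA adapter: shared + per-domain low-rank branches.
import torch
import torch.nn as nn

class DualPathLoRALinear(nn.Module):
    def __init__(self, dim: int, rank: int, n_domains: int):
        super().__init__()
        self.base = nn.Linear(dim, dim)
        for p in self.base.parameters():          # frozen backbone layer
            p.requires_grad_(False)
        self.inv_a = nn.Linear(dim, rank, bias=False)   # domain-invariant path
        self.inv_b = nn.Linear(rank, dim, bias=False)
        self.spec_a = nn.ModuleList(nn.Linear(dim, rank, bias=False)
                                    for _ in range(n_domains))
        self.spec_b = nn.ModuleList(nn.Linear(rank, dim, bias=False)
                                    for _ in range(n_domains))

    def forward(self, x: torch.Tensor, domain: int) -> torch.Tensor:
        shared = self.inv_b(self.inv_a(x))                        # all domains
        specific = self.spec_b[domain](self.spec_a[domain](x))    # one domain
        return self.base(x) + shared + specific
```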
arXiv Detail & Related papers (2025-06-30T17:14:12Z) - PointLoRA: Low-Rank Adaptation with Token Selection for Point Cloud Learning [54.99373314906667]
Self-supervised representation learning for point clouds has demonstrated effectiveness in improving pre-trained model performance across diverse tasks. As pre-trained models grow in complexity, fully fine-tuning them for downstream applications demands substantial computational and storage resources. We propose PointLoRA, a simple yet effective method that combines low-rank adaptation (LoRA) with multi-scale token selection to efficiently fine-tune point cloud models.
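A rough sketch of the combination described in the summary, scoring tokens and applying a low-rank update only to the top-k selected ones; the scorer, the top-k rule, and the placement of the adapter are assumptions, not the paper's actual architecture.

```python
# Sketch: LoRA applied only to the k most salient tokens.
import torch
import torch.nn as nn

class TokenSelectLoRA(nn.Module):
    def __init__(self, dim: int, rank: int, k: int):
        super().__init__()
        self.k = k
        self.score = nn.Linear(dim, 1)                 # learns token importance
        self.lora_a = nn.Linear(dim, rank, bias=False)
        self.lora_b = nn.Linear(rank, dim, bias=False)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, n_tokens, dim)
        scores = self.score(tokens).squeeze(-1)        # (batch, n_tokens)
        idx = scores.topk(self.k, dim=1).indices       # keep k salient tokens
        gathered = torch.gather(
            tokens, 1, idx.unsqueeze(-1).expand(-1, -1, tokens.size(-1)))
        update = self.lora_b(self.lora_a(gathered))    # low-rank update
        # Add the update back only at the selected token positions.
        return tokens.scatter_add(1, idx.unsqueeze(-1).expand_as(update), update)
```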
arXiv Detail & Related papers (2025-04-22T16:41:21Z) - Dynamic Prompt Allocation and Tuning for Continual Test-Time Adaptation [29.931721498877483]
Continual test-time adaptation (CTTA) has recently emerged to adapt to continuously evolving target distributions. Existing methods typically incorporate explicit regularization terms to constrain the variation of model parameters. We introduce learnable domain-specific prompts that guide the model to adapt to corresponding target domains.
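Domain-specific prompting of this kind is usually a small learnable prefix per domain; a minimal sketch follows, assuming selection by an explicit domain index (the pool size and selection rule are assumptions).

```python
# Sketch: one learnable prompt per target domain, prepended to the input.
import torch
import torch.nn as nn

class DomainPromptPool(nn.Module):
    def __init__(self, n_domains: int, prompt_len: int, dim: int):
        super().__init__()
        # One learnable prompt matrix per target domain.
        self.prompts = nn.Parameter(torch.randn(n_domains, prompt_len, dim) * 0.02)

    def forward(self, tokens: torch.Tensor, domain: int) -> torch.Tensor:
        # tokens: (batch, seq, dim); prepend the selected domain's prompt.
        prompt = self.prompts[domain].unsqueeze(0).expand(tokens.size(0), -1, -1)
        return torch.cat([prompt, tokens], dim=1)
```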
arXiv Detail & Related papers (2024-12-12T14:24:04Z) - ALoRE: Efficient Visual Adaptation via Aggregating Low Rank Experts [71.91042186338163]
ALoRE is a novel PETL method that reuses the hypercomplex parameterized space constructed by the Kronecker product to Aggregate Low Rank Experts. Thanks to this artful design, ALoRE maintains negligible extra parameters and can be effortlessly merged into the frozen backbone.
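For intuition, a generic sketch of building a weight update from a sum of Kronecker products of small factors, which is the mechanism the summary alludes to; the factor shapes and sum-of-experts composition are assumptions about the general technique, not ALoRE's exact formulation.

```python
# Sketch: a (dim x dim) update assembled from Kronecker products of
# small expert factors, mergeable into a frozen weight.
import torch
import torch.nn as nn

class KroneckerExperts(nn.Module):
    def __init__(self, dim: int, n_experts: int, block: int = 4):
        super().__init__()
        assert dim % block == 0
        small = dim // block
        # Each expert contributes kron(S_i, L_i) from far fewer
        # parameters than a dense (dim x dim) matrix.
        self.s = nn.Parameter(torch.randn(n_experts, block, block) * 0.02)
        self.l = nn.Parameter(torch.randn(n_experts, small, small) * 0.02)

    def delta_weight(self) -> torch.Tensor:
        return sum(torch.kron(self.s[i], self.l[i])
                   for i in range(self.s.size(0)))

    def forward(self, x: torch.Tensor, frozen_weight: torch.Tensor) -> torch.Tensor:
        # Merge the aggregated experts into the frozen backbone weight.
        return x @ (frozen_weight + self.delta_weight()).T
```

Because the update is a plain additive matrix, it can be folded into the frozen weight at inference time with no extra cost.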
arXiv Detail & Related papers (2024-12-11T12:31:30Z) - Investigating the potential of Sparse Mixtures-of-Experts for multi-domain neural machine translation [59.41178047749177]
We focus on multi-domain Neural Machine Translation, with the goal of developing efficient models which can handle data from various domains seen during training and are robust to domains unseen during training.
We hypothesize that Sparse Mixture-of-Experts (SMoE) models are a good fit for this task, as they enable efficient model scaling.
We conduct a series of experiments aimed at validating the utility of SMoE for the multi-domain scenario, and find that straightforward width scaling of the Transformer is simpler and surprisingly more efficient in practice, reaching the same performance level as SMoE.
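For readers unfamiliar with the baseline being compared against width scaling, here is a generic top-1 sparse MoE feed-forward layer; the router and expert shapes are illustrative, not the paper's configuration.

```python
# Generic top-1 sparse Mixture-of-Experts feed-forward layer.
import torch
import torch.nn as nn

class SparseMoEFFN(nn.Module):
    def __init__(self, dim: int, hidden: int, n_experts: int):
        super().__init__()
        self.router = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
            for _ in range(n_experts))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (n_tokens, dim); route each token to its single best expert.
        gate = self.router(x).softmax(dim=-1)      # (n_tokens, n_experts)
        best = gate.argmax(dim=-1)                 # hard top-1 assignment
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = best == e
            if mask.any():
                out[mask] = expert(x[mask]) * gate[mask, e].unsqueeze(-1)
        return out
```

Width scaling simply enlarges the single feed-forward `hidden` size instead, which avoids the routing machinery entirely.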
arXiv Detail & Related papers (2024-07-01T09:45:22Z) - DPCore: Dynamic Prompt Coreset for Continual Test-Time Adaptation [11.151967974753925]
Continual Test-Time Adaptation (CTTA) seeks to adapt source pre-trained models to continually changing, unseen target domains. DPCore is a method designed for robust performance across diverse domain change patterns.
arXiv Detail & Related papers (2024-06-15T20:47:38Z) - Agile Multi-Source-Free Domain Adaptation [25.06352660046911]
The Bi-level ATtention ENsemble (Bi-ATEN) module learns both intra-domain weights and inter-domain ensemble weights to achieve a fine balance between instance specificity and domain consistency.
We achieve comparable or even superior performance on the challenging DomainNet benchmark with less than 3% of parameters trained and 8 times the throughput of the SOTA method.
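A loose sketch of the inter-domain half of that idea, weighting multiple source models' predictions with per-instance learned weights; the softmax weighting scheme here is an assumption, not Bi-ATEN's exact module.

```python
# Sketch: per-instance learned ensemble over multiple source-domain models.
import torch
import torch.nn as nn

class InstanceEnsemble(nn.Module):
    def __init__(self, dim: int, n_sources: int):
        super().__init__()
        self.weigher = nn.Linear(dim, n_sources)   # inter-domain ensemble weights

    def forward(self, feat: torch.Tensor, source_logits: torch.Tensor) -> torch.Tensor:
        # feat: (batch, dim); source_logits: (batch, n_sources, n_classes)
        w = self.weigher(feat).softmax(dim=-1)      # per-instance domain weights
        return (w.unsqueeze(-1) * source_logits).sum(dim=1)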
arXiv Detail & Related papers (2024-03-08T05:17:10Z) - Multi-Prompt Alignment for Multi-Source Unsupervised Domain Adaptation [86.02485817444216]
We introduce Multi-Prompt Alignment (MPA), a simple yet efficient framework for multi-source UDA.
MPA denoises the learned prompts through an auto-encoding process and aligns them by maximizing the agreement of all the reconstructed prompts.
Experiments show that MPA achieves state-of-the-art results on three popular datasets with an impressive average accuracy of 54.1% on DomainNet.
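A toy sketch of the two MPA ingredients named in the summary, auto-encoding the learned prompts to denoise them and pushing the reconstructions to agree; the loss weighting and the pairwise-cosine agreement measure are assumptions.

```python
# Toy sketch: denoise prompts via an auto-encoder, maximize their agreement.
import torch
import torch.nn as nn
import torch.nn.functional as F

def mpa_style_loss(prompts: torch.Tensor, encoder: nn.Module, decoder: nn.Module):
    # prompts: (n_sources, prompt_len, dim), one learned prompt per source domain.
    recon = decoder(encoder(prompts))              # denoising auto-encoder pass
    recon_loss = F.mse_loss(recon, prompts)
    flat = F.normalize(recon.flatten(1), dim=-1)   # (n_sources, prompt_len*dim)
    agreement = flat @ flat.T                      # pairwise cosine similarity
    align_loss = -agreement.mean()                 # maximizing agreement
    return recon_loss + align_loss

# Usage with simple linear encoder/decoder over the feature dimension:
# enc, dec = nn.Linear(dim, dim // 2), nn.Linear(dim // 2, dim)
# loss = mpa_style_loss(prompts, enc, dec)
```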
arXiv Detail & Related papers (2022-09-30T03:40:10Z) - TDACNN: Target-domain-free Domain Adaptation Convolutional Neural Network for Drift Compensation in Gas Sensors [6.451060076703026]
In this paper, deep learning based on a target-domain-free domain adaptation convolutional neural network (TDACNN) is proposed.
The main concept is that CNNs extract not only the domain-specific features of samples but also the domain-invariant features underlying both the source and target domains.
Experiments on two drift datasets under different settings demonstrate the superiority of TDACNN compared with several state-of-the-art methods.
arXiv Detail & Related papers (2021-10-14T16:30:17Z) - ePointDA: An End-to-End Simulation-to-Real Domain Adaptation Framework for LiDAR Point Cloud Segmentation [111.56730703473411]
Training deep neural networks (DNNs) on LiDAR data requires large-scale point-wise annotations.
Simulation-to-real domain adaptation (SRDA) trains a DNN using unlimited synthetic data with automatically generated labels.
ePointDA consists of three modules: self-supervised dropout noise rendering, statistics-invariant and spatially-adaptive feature alignment, and transferable segmentation learning.
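Of the three modules, statistics-invariant feature alignment is the easiest to illustrate generically; the sketch below approximates it with instance normalization, which removes per-sample feature statistics that differ between simulated and real scans. This is a generic stand-in, not the paper's exact module.

```python
# Sketch: a conv block whose features are made statistics-invariant
# via instance normalization.
import torch
import torch.nn as nn

class StatsInvariantBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        # InstanceNorm removes per-sample mean/variance, which differ
        # between simulated and real LiDAR projections.
        self.norm = nn.InstanceNorm2d(channels, affine=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.norm(self.conv(x)))
```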
arXiv Detail & Related papers (2020-09-07T23:46:08Z)