A Unified Approach to Domain Incremental Learning with Memory: Theory
and Algorithm
- URL: http://arxiv.org/abs/2310.12244v1
- Date: Wed, 18 Oct 2023 18:30:07 GMT
- Title: A Unified Approach to Domain Incremental Learning with Memory: Theory
and Algorithm
- Authors: Haizhou Shi, Hao Wang
- Abstract summary: We propose a unified framework, dubbed Unified Domain Incremental Learning (UDIL), for domain incremental learning with memory.
Our UDIL **unifies** various existing methods, and our theoretical analysis shows that UDIL always achieves a tighter generalization error bound compared to these methods.
Empirical results show that our UDIL outperforms the state-of-the-art domain incremental learning methods on both synthetic and real-world datasets.
- Score: 7.919690718820747
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain incremental learning aims to adapt to a sequence of domains with
access to only a small subset of data (i.e., memory) from previous domains.
Various methods have been proposed for this problem, but it is still unclear
how they are related and when practitioners should choose one method over
another. In response, we propose a unified framework, dubbed Unified Domain
Incremental Learning (UDIL), for domain incremental learning with memory. Our
UDIL **unifies** various existing methods, and our theoretical analysis shows
that UDIL always achieves a tighter generalization error bound compared to
these methods. The key insight is that different existing methods correspond to
our bound with different **fixed** coefficients; based on insights from this
unification, our UDIL allows **adaptive** coefficients during training, thereby
always achieving the tightest bound. Empirical results show that our UDIL
outperforms the state-of-the-art domain incremental learning methods on both
synthetic and real-world datasets. Code will be available at
https://github.com/Wang-ML-Lab/unified-continual-learning.
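To make the key insight concrete, below is a minimal sketch of the adaptive-coefficient idea: the loss terms that existing methods weight with fixed constants are instead combined with coefficients learned during training. The three-term decomposition, the names, and the softmax parameterization are our own illustration under assumed PyTorch conventions, not the paper's exact bound or algorithm.

```python
# Hypothetical sketch: combine current-domain ERM, memory replay, and
# distillation losses with *learnable* coefficients instead of fixed ones.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveCoefficients(nn.Module):
    """Learnable convex-combination weights over the loss terms."""
    def __init__(self, num_terms: int):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_terms))

    def forward(self) -> torch.Tensor:
        # Softmax keeps the coefficients non-negative and summing to one,
        # so the combined objective remains a valid convex combination.
        return F.softmax(self.logits, dim=0)

def combined_loss(erm_loss, replay_loss, distill_loss, coeffs: AdaptiveCoefficients):
    w = coeffs()
    return w[0] * erm_loss + w[1] * replay_loss + w[2] * distill_loss
```

Fixing `w` to particular constants corresponds to individual baselines; optimizing `w` jointly with the model is what lets the combined objective track the tightest bound during training.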
Related papers
- Sequential Editing for Lifelong Training of Speech Recognition Models [10.770491329674401]
Fine-tuning solely on a new domain risks catastrophic forgetting (CF).
We propose Sequential Model Editing as a novel method to continually learn new domains in ASR systems.
Our study demonstrates up to a 15% Word Error Rate Reduction (WERR) over the fine-tuning baseline, and superior efficiency over other lifelong learning (LLL) techniques, on the CommonVoice English multi-accent dataset.
arXiv Detail & Related papers (2024-06-25T20:52:09Z)
- MLNet: Mutual Learning Network with Neighborhood Invariance for Universal Domain Adaptation [70.62860473259444]
Universal domain adaptation (UniDA) is a practical but challenging problem.
Existing UniDA methods may overlook intra-domain variations in the target domain.
We propose a novel Mutual Learning Network (MLNet) with neighborhood invariance for UniDA.
arXiv Detail & Related papers (2023-12-13T03:17:34Z)
- Self-Paced Learning for Open-Set Domain Adaptation [50.620824701934]
Traditional domain adaptation methods presume that the classes in the source and target domains are identical.
Open-set domain adaptation (OSDA) addresses this limitation by allowing previously unseen classes in the target domain.
We propose a novel framework based on self-paced learning to distinguish common and unknown class samples.
arXiv Detail & Related papers (2023-03-10T14:11:09Z)
- FIXED: Frustratingly Easy Domain Generalization with Mixup [53.782029033068675]
Domain generalization (DG) aims to learn a generalizable model from multiple training domains such that it can perform well on unseen target domains.
A popular strategy is to augment training data to benefit generalization through methods such as Mixup (Zhang et al., 2018).
We propose a simple yet effective enhancement for Mixup-based DG, namely domain-invariant Feature mIXup (FIX).
Our approach significantly outperforms nine state-of-the-art related methods, beating the best-performing baseline by 6.5% on average in terms of test accuracy.
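Below is a minimal sketch of feature-level Mixup, assuming a generic `encoder`/`head` split of the network; it illustrates the general idea of mixing in feature space rather than input space, not the paper's exact domain-invariant FIX objective.

```python
# Hypothetical feature-level Mixup: mix *features* and soft labels.
import torch
import torch.nn.functional as F

def feature_mixup_loss(encoder, head, x, y, num_classes, alpha=0.2):
    feats = encoder(x)                                # (batch, dim) features
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    mixed = lam * feats + (1 - lam) * feats[perm]     # mix features, not pixels
    logits = head(mixed)
    y_onehot = F.one_hot(y, num_classes).float()
    y_mixed = lam * y_onehot + (1 - lam) * y_onehot[perm]
    # Cross-entropy against the correspondingly mixed soft labels
    return -(y_mixed * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
```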
arXiv Detail & Related papers (2022-11-07T09:38:34Z)
- Adversarial Feature Augmentation for Cross-domain Few-shot Classification [2.68796389443975]
We propose a novel adversarial feature augmentation (AFA) method to bridge the domain gap in few-shot learning.
The proposed method is a plug-and-play module that can be easily integrated into existing few-shot learning methods.
arXiv Detail & Related papers (2022-08-23T15:10:22Z)
- Learning Phone Recognition from Unpaired Audio and Phone Sequences Based on Generative Adversarial Network [58.82343017711883]
This paper investigates how to learn directly from unpaired phone sequences and speech utterances.
GAN training is adopted in the first stage to find the mapping between unpaired speech and phone sequences.
In the second stage, an HMM is introduced and trained on the generator's output, which further boosts performance.
arXiv Detail & Related papers (2022-07-29T09:29:28Z)
- Balancing Multi-Domain Corpora Learning for Open-Domain Response Generation [3.3242685629646256]
Open-domain conversational systems are assumed to generate equally good responses on multiple domains.
This paper explores methods of generating relevant responses for each of multiple multi-domain corpora.
arXiv Detail & Related papers (2022-05-05T11:10:54Z)
- Simple Stochastic and Online Gradient Descent Algorithms for Pairwise Learning [65.54757265434465]
Pairwise learning refers to learning tasks where the loss function depends on a pair of instances.
Online gradient descent (OGD) is a popular approach to handling streaming data in pairwise learning.
In this paper, we propose simple stochastic and online gradient descent methods for pairwise learning.
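Below is a minimal sketch of the pairwise-learning setting with online gradient descent: each arriving example is paired against a small buffer of past examples under an AUC-style pairwise hinge loss. The buffering strategy and loss are our own illustration of the setting, not the paper's specific algorithms or guarantees.

```python
# Hypothetical online gradient descent for pairwise learning.
import torch

def ogd_pairwise(stream, dim, lr=0.01, buffer_size=32):
    w = torch.zeros(dim, requires_grad=True)   # linear scorer
    buffer = []                                # recent (x, y) examples
    for x, y in stream:                        # y in {-1, +1}
        loss = torch.tensor(0.0)
        for x_p, y_p in buffer:
            if y != y_p:                       # loss is defined on discordant pairs
                margin = (y - y_p) / 2 * (w @ (x - x_p))
                loss = loss + torch.clamp(1 - margin, min=0)  # pairwise hinge
        if loss.requires_grad:                 # skip if no usable pairs yet
            loss.backward()
            with torch.no_grad():
                w -= lr * w.grad
            w.grad = None
        buffer = (buffer + [(x, y)])[-buffer_size:]
    return w.detach()
```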
arXiv Detail & Related papers (2021-11-23T18:10:48Z)
- Universal Representation Learning from Multiple Domains for Few-shot Classification [41.821234589075445]
We propose to learn a single set of universal deep representations by distilling knowledge of multiple separately trained networks.
We show that the universal representations can be further refined for previously unseen domains by an efficient adaptation step.
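Below is a minimal sketch of the distillation step under common simplifying assumptions (shared label space, frozen teachers, temperature-scaled KL); the paper's actual multi-domain distillation and adaptation procedure is more involved.

```python
# Hypothetical multi-teacher distillation: average the KL to each teacher.
import torch
import torch.nn.functional as F

def multi_teacher_distill_loss(student_logits, teacher_logits_list, T=2.0):
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    loss = 0.0
    for t_logits in teacher_logits_list:
        p_teacher = F.softmax(t_logits.detach() / T, dim=1)  # frozen teacher
        loss = loss + F.kl_div(log_p_student, p_teacher, reduction="batchmean")
    return (T * T) * loss / len(teacher_logits_list)
```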
arXiv Detail & Related papers (2021-03-25T13:49:12Z)
- Domain Generalization in Biosignal Classification [37.70077538403524]
This study is the first to investigate domain generalization for biosignal data.
Our proposed method achieves accuracy gains of up to 16% for four completely unseen domains.
arXiv Detail & Related papers (2020-11-12T05:15:46Z)
- Towards Domain-Agnostic Contrastive Learning [103.40783553846751]
We propose a novel domain-agnostic approach to contrastive learning, named DACL.
Key to our approach is the use of Mixup noise to create similar and dissimilar examples by mixing data samples differently either at the input or hidden-state levels.
Our results show that DACL not only outperforms other domain-agnostic noising methods, such as Gaussian-noise, but also combines well with domain-specific methods, such as SimCLR.
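Below is a minimal sketch of Mixup noise for contrastive learning in the spirit described above: a positive view keeps the dominant share of the original sample and mixes in a random partner as noise. The uniform mixing range and the InfoNCE pairing are our own assumptions, not DACL's exact objective.

```python
# Hypothetical Mixup-noise views plus a standard InfoNCE loss.
import torch
import torch.nn.functional as F

def mixup_noise_view(x, low=0.5, high=0.9):
    # Assumes flattened inputs of shape (batch, features); keeping the
    # dominant share of x makes the mixed view a plausible positive.
    lam = torch.empty(x.size(0), 1).uniform_(low, high)
    perm = torch.randperm(x.size(0))
    return lam * x + (1 - lam) * x[perm]

def info_nce(z1, z2, temperature=0.1):
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature       # all-pairs similarities
    labels = torch.arange(z1.size(0))        # matching index is the positive
    return F.cross_entropy(logits, labels)
```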
arXiv Detail & Related papers (2020-11-09T13:41:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.