Towards Cross-Domain Continual Learning
- URL: http://arxiv.org/abs/2402.12490v1
- Date: Mon, 19 Feb 2024 19:54:03 GMT
- Title: Towards Cross-Domain Continual Learning
- Authors: Marcus de Carvalho, Mahardhika Pratama, Jie Zhang, Chua Haoyan, Edward Yapp
- Abstract summary: We introduce a novel approach called Cross-Domain Continual Learning (CDCL).
Our method combines inter- and intra-task cross-attention mechanisms within a compact convolutional network.
By leveraging an intra-task-specific pseudo-labeling method, we ensure accurate input pairs for both labeled and unlabeled samples.
- Score: 8.22291258264193
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Continual learning is a process that involves training learning agents to
sequentially master a stream of tasks or classes without revisiting past data.
The challenge lies in leveraging previously acquired knowledge to learn new
tasks efficiently, while avoiding catastrophic forgetting. Existing methods
primarily focus on single domains, restricting their applicability to specific
problems.
In this work, we introduce a novel approach called Cross-Domain Continual
Learning (CDCL) that addresses the limitation of being confined to single
supervised domains. Our method combines inter- and intra-task cross-attention
mechanisms within a compact convolutional network. This integration enables the
model to maintain alignment with features from previous tasks, thereby delaying
the data drift that may occur between tasks, while performing unsupervised
domain adaptation (UDA) between related domains. By leveraging an
intra-task-specific pseudo-labeling method, we ensure accurate input pairs for
both labeled and unlabeled samples, enhancing the learning process. To validate
our approach, we conduct extensive experiments on public UDA datasets,
showcasing its positive performance on cross-domain continual learning
challenges. Additionally, our work introduces incremental ideas that contribute
to the advancement of this field.
We make our code and models available to encourage further exploration and
reproduction of our results: https://github.com/Ivsucram/CDCL
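The abstract describes the mechanism only in words, so a rough illustration may help. The PyTorch sketch below shows one way inter- and intra-task cross-attention could sit on top of a compact convolutional backbone: the current batch first attends over itself (intra-task), then over a bank of stored previous-task features (inter-task) to stay aligned with earlier tasks. All module sizes, names, and the feature bank are our own assumptions, not the authors' architecture; their actual implementation is in the repository linked above.

```python
# Minimal sketch (assumed shapes and modules, not the authors' code):
# a compact CNN backbone plus cross-attention over a previous-task
# feature bank (inter-task) and over the current batch (intra-task).
import torch
import torch.nn as nn


class CompactConvBackbone(nn.Module):
    """Small CNN mapping images to a flat feature vector."""

    def __init__(self, in_ch: int = 3, dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):
        return self.net(x)


class CrossTaskAttention(nn.Module):
    """Queries from the current batch; keys/values from a feature memory."""

    def __init__(self, dim: int = 128, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query_feats, memory_feats):
        # query_feats: (B, dim); memory_feats: (M, dim)
        q = query_feats.unsqueeze(1)                      # (B, 1, dim)
        kv = memory_feats.unsqueeze(0).expand(q.size(0), -1, -1)
        out, _ = self.attn(q, kv, kv)                     # attend over memory
        return self.norm(query_feats + out.squeeze(1))    # residual + norm


backbone = CompactConvBackbone()
intra_attn = CrossTaskAttention()  # attention within the current task
inter_attn = CrossTaskAttention()  # attention over previous-task features

x = torch.randn(8, 3, 32, 32)      # current-task batch
prev_bank = torch.randn(256, 128)  # hypothetical stored past-task features

f = backbone(x)
f = intra_attn(f, f)               # intra-task: batch attends over itself
f = inter_attn(f, prev_bank)       # inter-task: stay aligned with old tasks
```

Likewise, the intra-task-specific pseudo-labeling is only named in the abstract, not specified. The snippet below is a generic sketch of the usual confidence-thresholding pattern that lets labeled source samples and pseudo-labeled target samples enter one loss; the 0.9 threshold and all function names are illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F


@torch.no_grad()
def pseudo_label(model, target_x, threshold: float = 0.9):
    """Return confident unlabeled inputs with their predicted labels."""
    probs = F.softmax(model(target_x), dim=1)
    conf, labels = probs.max(dim=1)
    keep = conf >= threshold           # drop low-confidence predictions
    return target_x[keep], labels[keep]


def adaptation_step(model, opt, source_x, source_y, target_x):
    """One supervised + pseudo-labeled update over paired batches."""
    tx, ty = pseudo_label(model, target_x)
    loss = F.cross_entropy(model(source_x), source_y)
    if len(ty) > 0:                    # add target term only when pairs exist
        loss = loss + F.cross_entropy(model(tx), ty)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```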
Related papers
- Overcoming Domain Drift in Online Continual Learning [24.86094018430407]
Online Continual Learning (OCL) empowers machine learning models to acquire new knowledge online across a sequence of tasks.
OCL faces a significant challenge: catastrophic forgetting, wherein what the model learned on previous tasks is substantially overwritten upon encountering new tasks.
We propose a novel rehearsal strategy, Drift-Reducing Rehearsal (DRR), to anchor the domain of old tasks and reduce the negative transfer effects.
arXiv Detail & Related papers (2024-05-15T06:57:18Z)
- DAWN: Domain-Adaptive Weakly Supervised Nuclei Segmentation via Cross-Task Interactions [17.68742587885609]
Current weakly supervised nuclei segmentation approaches follow a two-stage pseudo-label generation and network training process.
This paper introduces a novel domain-adaptive weakly supervised nuclei segmentation framework using cross-task interaction strategies.
To validate the effectiveness of our proposed method, we conduct extensive comparative and ablation experiments on six datasets.
arXiv Detail & Related papers (2024-04-23T12:01:21Z)
- Data-CUBE: Data Curriculum for Instruction-based Sentence Representation Learning [85.66907881270785]
We propose a data curriculum method, namely Data-CUBE, that arranges the order of all the multi-task data for training.
At the task level, we aim to find the optimal task order that minimizes the total cross-task interference risk.
At the instance level, we measure the difficulty of all instances per task, then divide them into easy-to-difficult mini-batches for training.
arXiv Detail & Related papers (2024-01-07T18:12:20Z)
- CDFSL-V: Cross-Domain Few-Shot Learning for Videos [58.37446811360741]
Few-shot video action recognition is an effective approach to recognizing new categories with only a few labeled examples.
Existing methods in video action recognition rely on large labeled datasets from the same domain.
We propose a novel cross-domain few-shot video action recognition method that leverages self-supervised learning and curriculum learning.
arXiv Detail & Related papers (2023-09-07T19:44:27Z)
- Learning with Style: Continual Semantic Segmentation Across Tasks and Domains [25.137859989323537]
Domain adaptation and class incremental learning deal with domain and task variability separately, and a unified solution remains an open problem.
We tackle both facets of the problem together, taking into account the semantic shift within both input and label spaces.
We show how the proposed method outperforms existing approaches, which prove to be ill-equipped to deal with continual semantic segmentation under both task and domain shift.
arXiv Detail & Related papers (2022-10-13T13:24:34Z)
- Feature Representation Learning for Unsupervised Cross-domain Image Retrieval [73.3152060987961]
Current supervised cross-domain image retrieval methods can achieve excellent performance.
However, the cost of data collection and labeling imposes an intractable barrier to practical deployment in real applications.
We introduce a new cluster-wise contrastive learning mechanism to help extract class semantic-aware features.
arXiv Detail & Related papers (2022-07-20T07:52:14Z)
- Interval Bound Interpolation for Few-shot Learning with Few Tasks [15.85259386116784]
Few-shot learning aims to transfer the knowledge acquired from training on a diverse set of tasks to unseen tasks with a limited amount of labeled data.
We introduce the notion of interval bounds from the provably robust training literature to few-shot learning.
We then use a novel strategy to artificially form new tasks for training by interpolating between the available tasks and their respective interval bounds.
arXiv Detail & Related papers (2022-04-07T15:29:27Z)
- Self-Taught Cross-Domain Few-Shot Learning with Weakly Supervised Object Localization and Task-Decomposition [84.24343796075316]
We propose a task-expansion-decomposition framework for Cross-Domain Few-Shot Learning.
The proposed Self-Taught (ST) approach alleviates the problem of non-target guidance by constructing task-oriented metric spaces.
We conduct experiments under the cross-domain setting including 8 target domains: CUB, Cars, Places, Plantae, CropDiseases, EuroSAT, ISIC, and ChestX.
arXiv Detail & Related papers (2021-09-03T04:23:07Z)
- Learning to Relate Depth and Semantics for Unsupervised Domain Adaptation [87.1188556802942]
We present an approach for encoding visual task relationships to improve model performance in an Unsupervised Domain Adaptation (UDA) setting.
We propose a novel Cross-Task Relation Layer (CTRL), which encodes task dependencies between the semantic and depth predictions.
Furthermore, we propose an Iterative Self-Learning (ISL) training scheme, which exploits semantic pseudo-labels to provide extra supervision on the target domain.
arXiv Detail & Related papers (2021-05-17T13:42:09Z)
- Data-efficient Weakly-supervised Learning for On-line Object Detection under Domain Shift in Robotics [24.878465999976594]
Several object detection methods have been proposed in the literature, the vast majority based on Deep Convolutional Neural Networks (DCNNs).
These methods have important limitations for robotics: learning solely on off-line data may introduce biases and prevent adaptation to novel tasks.
In this work, we investigate how weakly-supervised learning can cope with these problems.
arXiv Detail & Related papers (2020-12-28T16:36:11Z)
- Learning Task-oriented Disentangled Representations for Unsupervised Domain Adaptation [165.61511788237485]
Unsupervised domain adaptation (UDA) aims to address the domain-shift problem between a labeled source domain and an unlabeled target domain.
We propose a dynamic task-oriented disentangling network (DTDN) to learn disentangled representations in an end-to-end fashion for UDA.
arXiv Detail & Related papers (2020-07-27T01:21:18Z)
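The DTDN entry above describes disentangling only at a high level. As a generic sketch of the idea (our own illustrative losses and sizes, not DTDN's actual objective), a shared encoder can split each representation into a task-relevant part used for classification and a task-irrelevant part penalized for overlapping with it:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DisentanglingNet(nn.Module):
    """Shared encoder with task-relevant and task-irrelevant branches."""

    def __init__(self, in_dim: int = 256, feat_dim: int = 64, n_classes: int = 10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.relevant = nn.Linear(128, feat_dim)    # used by the classifier
        self.irrelevant = nn.Linear(128, feat_dim)  # nuisance factors
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, x):
        h = self.encoder(x)
        z_rel, z_irr = self.relevant(h), self.irrelevant(h)
        return self.classifier(z_rel), z_rel, z_irr


def disentangle_loss(logits, y, z_rel, z_irr, weight: float = 0.1):
    """Classify from the relevant part; penalize overlap between parts."""
    overlap = (F.normalize(z_rel, dim=1) * F.normalize(z_irr, dim=1)).sum(1).pow(2).mean()
    return F.cross_entropy(logits, y) + weight * overlap
```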
This list is automatically generated from the titles and abstracts of the papers on this site.