Generative appearance replay for continual unsupervised domain
adaptation
- URL: http://arxiv.org/abs/2301.01211v1
- Date: Tue, 3 Jan 2023 17:04:05 GMT
- Authors: Boqi Chen, Kevin Thandiackal, Pushpak Pati, Orcun Goksel
- Abstract summary: GarDA is a generative-replay based approach that can adapt a segmentation model sequentially to new domains with unlabeled data.
We evaluate GarDA on two datasets with different organs and modalities, where it substantially outperforms existing techniques.
- Score: 4.623578780480946
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Deep learning models can achieve high accuracy when trained on large amounts
of labeled data. However, real-world scenarios often involve several
challenges: Training data may become available in installments, may originate
from multiple different domains, and may not contain labels for training.
Certain settings, for instance medical applications, often involve further
restrictions that prohibit retention of previously seen data due to privacy
regulations. In this work, to address such challenges, we study unsupervised
segmentation in continual learning scenarios that involve domain shift. To that
end, we introduce GarDA (Generative Appearance Replay for continual Domain
Adaptation), a generative-replay based approach that can adapt a segmentation
model sequentially to new domains with unlabeled data. In contrast to
single-step unsupervised domain adaptation (UDA), continual adaptation to a
sequence of domains enables leveraging and consolidation of information from
multiple domains. Unlike previous approaches in incremental UDA, our method
does not require access to previously seen data, making it applicable in many
practical scenarios. We evaluate GarDA on two datasets with different organs
and modalities, where it substantially outperforms existing techniques.
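The core idea behind generative replay, replacing stored data from earlier domains with samples drawn from a learned generator, can be sketched as follows. This is an illustrative toy in plain Python, not GarDA's actual implementation: `continual_adapt` and its internals are hypothetical names, and the "model" and "generator" are simple stand-ins (a set and a sampling closure) for a segmentation network and an appearance generator.

```python
import random

def continual_adapt(domains):
    """Toy sketch of generative replay for continual adaptation.

    Each element of 'domains' is a list of unlabeled samples, seen one
    domain at a time. Instead of retaining past data, a generator is
    kept that can re-synthesize samples from all previously seen
    domains; here it is faked as a closure over a distribution summary.
    """
    model = set()      # stands in for the segmentation model's knowledge
    generator = None   # stands in for the trained appearance generator
    for domain in domains:
        # 1. Replay: draw pseudo-samples covering all previous domains.
        replay = [generator() for _ in range(len(domain))] if generator else []
        # 2. Adapt on new-domain data plus replayed samples; the real
        #    method would run adversarial adaptation and distillation here.
        for x in domain + replay:
            model.add(x)
        # 3. Refit the generator so it also covers the new domain,
        #    allowing the next step to replay it without stored data.
        seen = list(model)
        generator = lambda seen=seen: random.choice(seen)
    return model
```

The key property illustrated is that each adaptation step only touches the current domain's data plus synthetic replay, which is why no previously seen data needs to be retained.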
Related papers
- Multi-Target Unsupervised Domain Adaptation for Semantic Segmentation without External Data [25.386114973556406]
Multi-target unsupervised domain adaptation (UDA) aims to learn a unified model to address the domain shift between multiple target domains.
Most existing solutions require labeled data from the source domain and unlabeled data from multiple target domains concurrently during training.
We introduce a new strategy called "multi-target UDA without external data" for semantic segmentation.
arXiv Detail & Related papers (2024-05-10T14:29:51Z)
- SALUDA: Surface-based Automotive Lidar Unsupervised Domain Adaptation [62.889835139583965]
We introduce an unsupervised auxiliary task of learning an implicit underlying surface representation simultaneously on source and target data.
As both domains share the same latent representation, the model is forced to accommodate discrepancies between the two sources of data.
Our experiments demonstrate that our method achieves a better performance than the current state of the art, both in real-to-real and synthetic-to-real scenarios.
arXiv Detail & Related papers (2023-04-06T17:36:23Z)
- Multi-scale Feature Alignment for Continual Learning of Unlabeled Domains [3.9498537297431167]
Generative feature-driven image replay, in conjunction with a dual-purpose discriminator, enables the generation of images with realistic features for replay.
We present detailed ablation experiments studying our proposed method components and demonstrate a possible use-case of our continual UDA method for an unsupervised patch-based segmentation task.
arXiv Detail & Related papers (2023-02-02T18:19:01Z)
- CONDA: Continual Unsupervised Domain Adaptation Learning in Visual Perception for Self-Driving Cars [11.479857808195774]
We propose a Continual Unsupervised Domain Adaptation (CONDA) approach that allows the model to continually learn and adapt as new data arrives.
To avoid the catastrophic forgetting problem and maintain the performance of the segmentation models, we present a novel Bijective Maximum Likelihood loss.
arXiv Detail & Related papers (2022-12-01T16:15:54Z)
- Single-domain Generalization in Medical Image Segmentation via Test-time Adaptation from Shape Dictionary [64.5632303184502]
Domain generalization typically requires data from multiple source domains for model learning.
This paper studies the important yet challenging single domain generalization problem, in which a model is learned under the worst-case scenario with only one source domain to directly generalize to different unseen target domains.
We present a novel approach to address this problem in medical image segmentation, which extracts and integrates semantic shape prior information that is invariant across domains.
arXiv Detail & Related papers (2022-06-29T08:46:27Z)
- Online Unsupervised Domain Adaptation for Person Re-identification [4.48123023008698]
We present a new yet practical online setting for Unsupervised Domain Adaptation for person Re-ID.
We adapt and evaluate the state-of-the-art UDA algorithms on this new online setting using the well-known Market-1501, Duke, and MSMT17 benchmarks.
arXiv Detail & Related papers (2022-05-09T15:36:08Z)
- Forget Less, Count Better: A Domain-Incremental Self-Distillation Learning Benchmark for Lifelong Crowd Counting [51.44987756859706]
Off-the-shelf methods have several drawbacks when handling multiple domains.
Lifelong Crowd Counting aims to alleviate catastrophic forgetting and improve generalization ability.
arXiv Detail & Related papers (2022-05-06T15:37:56Z)
- Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with a wide range of potential applications.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z)
- Multi-Domain Adversarial Feature Generalization for Person Re-Identification [52.835955258959785]
We propose a multi-dataset feature generalization network (MMFA-AAE).
It is capable of learning a universal domain-invariant feature representation from multiple labeled datasets and generalizing it to unseen camera systems.
It also surpasses many state-of-the-art supervised methods and unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2020-11-25T08:03:15Z) - A Review of Single-Source Deep Unsupervised Visual Domain Adaptation [81.07994783143533]
Large-scale labeled training datasets have enabled deep neural networks to excel across a wide range of benchmark vision tasks.
In many applications, it is prohibitively expensive and time-consuming to obtain large quantities of labeled data.
To cope with limited labeled training data, many have attempted to directly apply models trained on a large-scale labeled source domain to another sparsely labeled or unlabeled target domain.
arXiv Detail & Related papers (2020-09-01T00:06:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.