Continual Unsupervised Domain Adaptation for Semantic Segmentation
- URL: http://arxiv.org/abs/2010.09236v2
- Date: Mon, 11 Oct 2021 05:58:11 GMT
- Title: Continual Unsupervised Domain Adaptation for Semantic Segmentation
- Authors: Joonhyuk Kim, Sahng-Min Yoo, Gyeong-Moon Park, Jong-Hwan Kim
- Abstract summary: Unsupervised Domain Adaptation (UDA) for semantic segmentation has been favorably applied to real-world scenarios in which pixel-level labels are hard to obtain.
We propose Continual UDA for semantic segmentation based on a newly designed Expanding Target-specific Memory (ETM) framework.
Our novel ETM framework contains Target-specific Memory (TM) for each target domain to alleviate catastrophic forgetting.
- Score: 14.160280479726921
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised Domain Adaptation (UDA) for semantic segmentation has been
favorably applied to real-world scenarios in which pixel-level labels are hard
to obtain. In most existing UDA methods, all target data are
assumed to be introduced simultaneously. Yet, the data are usually presented
sequentially in the real world. Moreover, Continual UDA, which deals with more
practical scenarios with multiple target domains in the continual learning
setting, has not been actively explored. In this light, we propose Continual
UDA for semantic segmentation based on a newly designed Expanding
Target-specific Memory (ETM) framework. Our novel ETM framework contains
Target-specific Memory (TM) for each target domain to alleviate catastrophic
forgetting. Furthermore, the proposed Double Hinge Adversarial (DHA) loss leads
the network to better overall UDA performance. Our design of the TM and the
training objectives lets the semantic segmentation network adapt to the current
target domain while preserving the knowledge learned on previous target
domains. The model with the proposed framework outperforms other
state-of-the-art models in continual learning settings on standard benchmarks
such as GTA5, SYNTHIA, CityScapes, IDD, and Cross-City datasets. The source
code is available at https://github.com/joonh-kim/ETM.
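The abstract describes the ETM framework only at a high level, so the following PyTorch-style sketch is merely an illustration of the "expanding target-specific memory" idea, not the authors' implementation (which is available at the repository above): a shared segmentation network gains one small Target-specific Memory (TM) module whenever a new target domain arrives, and the TMs of previously seen targets are frozen to limit forgetting. The module layout, the 1x1-conv design, and the freezing policy are assumptions.

```python
import torch.nn as nn

class TargetSpecificMemory(nn.Module):
    """Lightweight per-domain module (illustrative 1x1-conv residual block)."""
    def __init__(self, channels: int):
        super().__init__()
        self.adapt = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, feat):
        return feat + self.adapt(feat)

class ETMStyleSegmenter(nn.Module):
    """Shared backbone/classifier plus one memory module per target domain."""
    def __init__(self, backbone: nn.Module, classifier: nn.Module, channels: int):
        super().__init__()
        self.backbone = backbone          # shared feature extractor
        self.classifier = classifier      # shared segmentation head
        self.memories = nn.ModuleList()   # one TM per seen target domain
        self.channels = channels

    def expand(self):
        """Call once when a new target domain is introduced."""
        for tm in self.memories:          # freeze TMs of earlier targets
            for p in tm.parameters():
                p.requires_grad_(False)
        self.memories.append(TargetSpecificMemory(self.channels))

    def forward(self, x, domain_idx=None):
        feat = self.backbone(x)
        if domain_idx is not None:        # route through that domain's TM
            feat = self.memories[domain_idx](feat)
        return self.classifier(feat)
```

During adaptation to the k-th target one would call `expand()`, train the new TM (and, in the real method, the shared network under the DHA loss, whose exact form the abstract does not give), and at test time route each input through the TM of its domain.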
Related papers
- PiPa++: Towards Unification of Domain Adaptive Semantic Segmentation via Self-supervised Learning [34.786268652516355]
Unsupervised domain adaptive segmentation aims to improve the segmentation accuracy of models on target domains without relying on labeled data from those domains.
It seeks to align the feature representations of the source domain (where labeled data is available) and the target domain (where only unlabeled data is present).
arXiv Detail & Related papers (2024-07-24T08:53:29Z) - Divide, Ensemble and Conquer: The Last Mile on Unsupervised Domain Adaptation for On-Board Semantic Segmentation [7.658092990342648]
We propose DEC, a flexible UDA framework for multi-source datasets.
Following a divide-and-conquer strategy, DEC simplifies the task by categorizing semantic classes, training models for each category, and fusing their outputs by an ensemble model trained exclusively on synthetic datasets to obtain the final segmentation mask.
DEC can integrate with existing UDA methods, achieving state-of-the-art performance on Cityscapes, BDD100K, and Mapillary Vistas.
arXiv Detail & Related papers (2024-06-27T00:54:11Z) - Joint semi-supervised and contrastive learning enables zero-shot domain-adaptation and multi-domain segmentation [1.5393913074555419]
SegCLR is a versatile framework designed to segment volumetric images across different domains.
We demonstrate the superior performance of SegCLR through a comprehensive evaluation.
arXiv Detail & Related papers (2024-05-08T18:10:59Z) - Pulling Target to Source: A New Perspective on Domain Adaptive Semantic Segmentation [80.1412989006262]
Domain adaptive semantic segmentation aims to transfer knowledge from a labeled source domain to an unlabeled target domain.
We propose T2S-DA, which we interpret as a form of pulling Target to Source for Domain Adaptation.
arXiv Detail & Related papers (2023-05-23T07:09:09Z) - IDA: Informed Domain Adaptive Semantic Segmentation [51.12107564372869]
We propose an Informed Domain Adaptation (IDA) model, a self-training framework that mixes the data based on class-level segmentation performance.
In our IDA model, the class-level performance is tracked by an expected confidence score (ECS), and a dynamic schedule then determines the mixing ratio for data in different domains (see the illustrative sketch after this list).
Our proposed method outperforms the state-of-the-art UDA-SS method by a margin of 1.1 mIoU in the adaptation of GTA-V to Cityscapes and of 0.9 mIoU in the adaptation of SYNTHIA to Cityscapes.
arXiv Detail & Related papers (2023-03-05T18:16:34Z) - Discover, Hallucinate, and Adapt: Open Compound Domain Adaptation for
Semantic Segmentation [91.30558794056056]
Unsupervised domain adaptation (UDA) for semantic segmentation has been attracting attention recently.
We present a novel framework based on three main design principles: discover, hallucinate, and adapt.
We evaluate our solution on the standard GTA to C-Driving benchmark and achieve new state-of-the-art results.
arXiv Detail & Related papers (2021-10-08T13:20:09Z) - Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training
for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with a wide range of potential applications.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z) - Cluster, Split, Fuse, and Update: Meta-Learning for Open Compound Domain
Adaptive Semantic Segmentation [102.42638795864178]
We propose a principled meta-learning based approach to OCDA for semantic segmentation.
We cluster the target domain into multiple sub-target domains by image style, extracted in an unsupervised manner.
A meta-learner is thereafter deployed to learn to fuse sub-target domain-specific predictions, conditioned upon the style code.
We learn to update the model online via the model-agnostic meta-learning (MAML) algorithm to further improve generalization.
arXiv Detail & Related papers (2020-12-15T13:21:54Z) - MADAN: Multi-source Adversarial Domain Aggregation Network for Domain
Adaptation [58.38749495295393]
Domain adaptation aims to learn a transferable model to bridge the domain shift between one labeled source domain and another sparsely labeled or unlabeled target domain.
Recent multi-source domain adaptation (MDA) methods do not consider the pixel-level alignment between sources and target.
We propose a novel MDA framework to address these challenges.
arXiv Detail & Related papers (2020-02-19T21:22:00Z)
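As noted in the IDA entry above, class-level performance is tracked by an expected confidence score (ECS) that drives a dynamic data-mixing schedule; the abstract gives no formulas, so the sketch below is only a hypothetical PyTorch illustration of that general idea. The momentum update and the inverse-confidence weighting are assumptions, not the paper's actual definitions.

```python
import torch

class ClassConfidenceTracker:
    """Running per-class confidence on target predictions (illustrative ECS)."""
    def __init__(self, num_classes: int, momentum: float = 0.9):
        self.ecs = torch.zeros(num_classes)   # one running score per class
        self.momentum = momentum

    @torch.no_grad()
    def update(self, logits: torch.Tensor):
        """logits: (B, C, H, W) segmentation outputs on target-domain images."""
        probs = logits.softmax(dim=1)
        conf, pred = probs.max(dim=1)          # per-pixel confidence and class
        for c in range(self.ecs.numel()):
            mask = pred == c
            if mask.any():
                cur = conf[mask].mean()
                self.ecs[c] = self.momentum * self.ecs[c] + (1 - self.momentum) * cur

    def mixing_weights(self) -> torch.Tensor:
        """Hypothetical schedule: sample low-confidence classes more often."""
        weights = 1.0 - self.ecs
        return weights / weights.sum().clamp(min=1e-6)
```

A self-training loop could call `update()` on each target batch and use `mixing_weights()` to bias which classes are mixed from the source domain into training images.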