ProCST: Boosting Semantic Segmentation using Progressive Cyclic
Style-Transfer
- URL: http://arxiv.org/abs/2204.11891v1
- Date: Mon, 25 Apr 2022 18:01:05 GMT
- Title: ProCST: Boosting Semantic Segmentation using Progressive Cyclic
Style-Transfer
- Authors: Shahaf Ettedgui, Shady Abu-Hussein, Raja Giryes
- Abstract summary: We propose a novel two-stage framework for improving domain adaptation techniques.
In the first step, we progressively train a multi-scale neural network to perform an initial transfer from the source data to the target data.
This new data has a reduced domain gap from the desired target domain, and the applied UDA approach further closes the gap.
- Score: 38.03127458140549
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Using synthetic data for training neural networks that achieve good
performance on real-world data is an important task as it has the potential to
reduce the need for costly data annotation. Yet, a network that is trained on
synthetic data alone does not perform well on real data due to the domain gap
between the two. Reducing this gap, also known as domain adaptation, has been
widely studied in recent years. In the unsupervised domain adaptation (UDA)
framework, unlabeled real data is used during training with labeled synthetic
data to obtain a neural network that performs well on real data. In this work,
we focus on image data. For the semantic segmentation task, it has been shown
that performing image-to-image translation from source to target, and then
training a segmentation network on the source annotations, leads to poor
results. Therefore, joint training of the two is essential, which has been a
common practice in many techniques. Yet, closing the large domain gap between
the source and the target by directly performing the adaptation between the two
is challenging. In this work, we propose a novel two-stage framework for
improving domain adaptation techniques. In the first step, we progressively
train a multi-scale neural network to perform an initial transfer from the
source data to the target data. We denote the new transformed data as "Source
in Target" (SiT). Then, we use the generated SiT data as the input to any
standard UDA approach. This new data has a reduced domain gap from the desired
target domain, and the applied UDA approach further closes the gap. We
demonstrate the improvement achieved by our framework with two state-of-the-art
methods for semantic segmentation, DAFormer and ProDA, on two UDA tasks, GTA5
to Cityscapes and Synthia to Cityscapes. Code and state-of-the-art checkpoints
of ProCST+DAFormer are provided.
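The two-stage idea in the abstract can be sketched in toy form. This is not the authors' code: stage one below approximates the learned multi-scale style transfer with simple per-channel statistic matching (an AdaIN-style baseline), applied coarse-to-fine, and all function names are illustrative. Stage two would feed the resulting "Source in Target" (SiT) images, paired with the unchanged source labels, into any standard UDA method.

```python
import numpy as np

def match_channel_stats(src, tgt):
    # Align each channel of src to the mean/std of the corresponding
    # channel of tgt (AdaIN-style statistic matching).
    out = np.empty_like(src, dtype=float)
    for c in range(src.shape[2]):
        s = src[..., c].astype(float)
        t = tgt[..., c].astype(float)
        out[..., c] = (s - s.mean()) / (s.std() + 1e-8) * t.std() + t.mean()
    return out

def downscale(img, factor):
    # Naive box downscaling by an integer factor; stands in for the
    # coarser levels of a learned image pyramid.
    h, w, c = img.shape
    img = img[:h - h % factor, :w - w % factor]
    return img.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

def progressive_transfer(source, target, scales=(4, 2, 1)):
    # Stage 1 (toy): transfer target statistics coarse-to-fine,
    # finishing at full resolution, to produce a SiT image.
    out = source.astype(float)
    for f in scales:
        ref = downscale(target, f) if f > 1 else target
        out = match_channel_stats(out, ref)
    return out
```

In stage two, the SiT images would simply replace the raw source images in the chosen UDA pipeline (e.g. DAFormer or ProDA), which then closes the remaining, smaller domain gap.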
Related papers
- Robust Source-Free Domain Adaptation for Fundus Image Segmentation [3.585032903685044]
Unsupervised Domain Adaptation (UDA) is a learning technique that transfers knowledge learned from labelled data in the source domain to a target domain where only unlabelled data is available.
In this study, we propose a two-stage training strategy for robust domain adaptation.
We propose a novel robust pseudo-label and pseudo-boundary (PLPB) method, which effectively utilizes unlabeled target data to generate pseudo labels and pseudo boundaries.
arXiv Detail & Related papers (2023-10-25T14:25:18Z)
- Threshold-adaptive Unsupervised Focal Loss for Domain Adaptation of Semantic Segmentation [25.626882426111198]
Unsupervised domain adaptation (UDA) for semantic segmentation has recently gained increasing research attention.
In this paper, we propose a novel two-stage entropy-based UDA method for semantic segmentation.
Our method achieves state-of-the-art 58.4% and 59.6% mIoUs on SYNTHIA-to-Cityscapes and GTA5-to-Cityscapes using DeepLabV2 and competitive performance using the lightweight BiSeNet.
arXiv Detail & Related papers (2022-08-23T03:48:48Z)
- Bi-level Alignment for Cross-Domain Crowd Counting [113.78303285148041]
Current methods rely on external data for training an auxiliary task or apply an expensive coarse-to-fine estimation.
We develop a new adversarial learning based method, which is simple and efficient to apply.
We evaluate our approach on five real-world crowd counting benchmarks, where we outperform existing approaches by a large margin.
arXiv Detail & Related papers (2022-05-12T02:23:25Z)
- Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with wide ranges of application potentials.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z)
- Distill and Fine-tune: Effective Adaptation from a Black-box Source Model [138.12678159620248]
Unsupervised domain adaptation (UDA) aims to transfer knowledge from previous related labeled datasets (source) to a new unlabeled dataset (target).
We propose a novel two-step adaptation framework called Distill and Fine-tune (Dis-tune).
arXiv Detail & Related papers (2021-04-04T05:29:05Z)
- Deep Co-Training with Task Decomposition for Semi-Supervised Domain Adaptation [80.55236691733506]
Semi-supervised domain adaptation (SSDA) aims to adapt models trained from a labeled source domain to a different but related target domain.
We propose to explicitly decompose the SSDA task into two sub-tasks: a semi-supervised learning (SSL) task in the target domain and an unsupervised domain adaptation (UDA) task across domains.
arXiv Detail & Related papers (2020-07-24T17:57:54Z)
- Self domain adapted network [6.040230864736051]
Domain shift is a major problem for deploying deep networks in clinical practice.
We propose a novel self domain adapted network (SDA-Net) that can rapidly adapt itself to a single test subject.
arXiv Detail & Related papers (2020-07-07T01:41:34Z)
- Unsupervised Intra-domain Adaptation for Semantic Segmentation through Self-Supervision [73.76277367528657]
Convolutional neural network-based approaches have achieved remarkable progress in semantic segmentation, but they typically require large amounts of pixel-level annotated data.
To cope with this limitation, automatically annotated data generated from graphic engines are used to train segmentation models.
We propose a two-step self-supervised domain adaptation approach to minimize the inter-domain and intra-domain gap together.
arXiv Detail & Related papers (2020-04-16T15:24:11Z) - Supervised Domain Adaptation using Graph Embedding [86.3361797111839]
Domain adaptation methods assume that distributions between the two domains are shifted and attempt to realign them.
We propose a generic framework based on graph embedding.
We show that the proposed approach leads to a powerful Domain Adaptation framework.
arXiv Detail & Related papers (2020-03-09T12:25:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.