Continual Coarse-to-Fine Domain Adaptation in Semantic Segmentation
- URL: http://arxiv.org/abs/2201.06974v1
- Date: Tue, 18 Jan 2022 13:31:19 GMT
- Title: Continual Coarse-to-Fine Domain Adaptation in Semantic Segmentation
- Authors: Donald Shenaj, Francesco Barbato, Umberto Michieli, Pietro Zanuttigh
- Abstract summary: Deep neural networks are typically trained in a single shot for a specific task and data distribution.
In real world settings both the task and the domain of application can change.
We introduce the novel task of coarse-to-fine learning of semantic segmentation architectures in the presence of domain shift.
- Score: 22.366638308792734
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks are typically trained in a single shot for a specific
task and data distribution, but in real world settings both the task and the
domain of application can change. The problem becomes even more challenging in
dense predictive tasks, such as semantic segmentation, and furthermore most
approaches tackle the two problems separately. In this paper we introduce the
novel task of coarse-to-fine learning of semantic segmentation architectures in
the presence of domain shift. We consider subsequent learning stages progressively
refining the task at the semantic level; i.e., the finer set of semantic labels
at each learning step is hierarchically derived from the coarser set of the
previous step. We propose a new approach (CCDA) to tackle this scenario. First,
we employ the maximum squares loss to align source and target domains and, at
the same time, to balance the gradients between well-classified and harder
samples. Second, we introduce a novel coarse-to-fine knowledge distillation
constraint to transfer network capabilities acquired on a coarser set of labels
to a set of finer labels. Finally, we design a coarse-to-fine weight
initialization rule to spread the importance from each coarse class to the
respective finer classes. To evaluate our approach, we design two benchmarks
where source knowledge is extracted from the GTA5 dataset and transferred to
either the Cityscapes or the IDD dataset, and we show that our approach
outperforms the main competitors.
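The three components of CCDA described above can be sketched as follows. This is a minimal NumPy illustration of the ideas, not the authors' code: the `children` mapping from each coarse class to its finer labels, the array shapes, and the mean-squared form of the distillation constraint are all assumptions for the sketch.

```python
import numpy as np

def maximum_squares_loss(probs):
    """Maximum squares loss: -1/(2N) * sum of squared softmax probabilities.
    Encourages confident predictions while keeping gradient magnitudes
    balanced between well-classified and harder samples."""
    n = probs.shape[0]
    return -np.sum(probs ** 2) / (2 * n)

def coarse_to_fine_kd(fine_probs, coarse_probs, children):
    """Coarse-to-fine distillation (sketch): the fine probabilities summed
    over each coarse class's children should match the coarse teacher's
    output for that class. `children[c]` lists the fine labels
    hierarchically derived from coarse class c."""
    loss = 0.0
    for c, fine_ids in children.items():
        aggregated = fine_probs[:, fine_ids].sum(axis=1)
        loss += np.mean((aggregated - coarse_probs[:, c]) ** 2)
    return loss / len(children)

def coarse_to_fine_init(coarse_weights, children, n_fine):
    """Weight-initialization rule (sketch): each fine class starts from the
    classifier weights of its coarse parent, spreading the parent's
    importance to the respective finer classes."""
    d = coarse_weights.shape[1]
    fine_weights = np.zeros((n_fine, d))
    for c, fine_ids in children.items():
        for f in fine_ids:
            fine_weights[f] = coarse_weights[c]
    return fine_weights
```

By construction, a fine predictor whose child probabilities sum exactly to the coarse teacher's outputs incurs zero distillation loss, so the constraint only penalizes deviations from the coarse-level knowledge.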
Related papers
- ContextSeg: Sketch Semantic Segmentation by Querying the Context with Attention [7.783971241874388]
This paper presents ContextSeg - a simple yet highly effective approach to tackling this problem with two stages.
In the first stage, to better encode the shape and positional information of strokes, we propose to predict an extra dense distance field in an autoencoder network.
In the second stage, we treat an entire stroke as a single entity and label a group of strokes within the same semantic part using an auto-regressive Transformer with the default attention mechanism.
arXiv Detail & Related papers (2023-11-28T10:53:55Z) - Bi-level Alignment for Cross-Domain Crowd Counting [113.78303285148041]
Current methods rely on external data for training an auxiliary task or apply an expensive coarse-to-fine estimation.
We develop a new adversarial learning based method, which is simple and efficient to apply.
We evaluate our approach on five real-world crowd counting benchmarks, where we outperform existing approaches by a large margin.
arXiv Detail & Related papers (2022-05-12T02:23:25Z) - Self-semantic contour adaptation for cross modality brain tumor segmentation [13.260109561599904]
We propose exploiting low-level edge information to facilitate the adaptation as a precursor task.
The precise contour then provides spatial information to guide the semantic adaptation.
We evaluate our framework on the BraTS2018 database for cross-modality segmentation of brain tumors.
arXiv Detail & Related papers (2022-01-13T15:16:55Z) - Semi-supervised Domain Adaptive Structure Learning [72.01544419893628]
Semi-supervised domain adaptation (SSDA) is a challenging problem requiring methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.
We introduce an adaptive structure learning method to regularize the cooperation of SSL and DA.
arXiv Detail & Related papers (2021-12-12T06:11:16Z) - Improving Semi-Supervised and Domain-Adaptive Semantic Segmentation with Self-Supervised Depth Estimation [94.16816278191477]
We present a framework for semi-supervised and domain-adaptive semantic segmentation.
It is enhanced by self-supervised monocular depth estimation trained only on unlabeled image sequences.
We validate the proposed model on the Cityscapes dataset.
arXiv Detail & Related papers (2021-08-28T01:33:38Z) - Contrastive Learning and Self-Training for Unsupervised Domain Adaptation in Semantic Segmentation [71.77083272602525]
UDA attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains.
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
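The category-wise centroid idea above can be sketched roughly as follows. This is a simplified NumPy illustration under assumed shapes: per-class mean features are computed in each domain and pulled together, whereas the paper's actual objective is contrastive and operates on deep feature maps.

```python
import numpy as np

def class_centroids(features, labels, n_classes):
    """Mean feature vector per semantic class; zeros for absent classes.
    `features` is (N, D), `labels` is (N,) with values in [0, n_classes)."""
    d = features.shape[1]
    cents = np.zeros((n_classes, d))
    for c in range(n_classes):
        mask = labels == c
        if mask.any():
            cents[c] = features[mask].mean(axis=0)
    return cents

def centroid_alignment_loss(src_cents, tgt_cents):
    """Alignment objective (sketch): mean squared distance between matching
    source and target class centroids. The actual method uses a contrastive
    loss; this only illustrates the cross-domain alignment idea."""
    return float(np.mean((src_cents - tgt_cents) ** 2))
```

Perfectly aligned domains (identical per-class centroids) give zero loss, and the loss grows as same-class features drift apart across domains.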
arXiv Detail & Related papers (2021-05-05T11:55:53Z) - Curriculum Graph Co-Teaching for Multi-Target Domain Adaptation [78.28390172958643]
We identify two key aspects that can help to alleviate multiple domain shifts in multi-target domain adaptation (MTDA).
We propose Curriculum Graph Co-Teaching (CGCT) that uses a dual classifier head, with one of them being a graph convolutional network (GCN) which aggregates features from similar samples across the domains.
When the domain labels are available, we propose Domain-aware Curriculum Learning (DCL), a sequential adaptation strategy that first adapts on the easier target domains, followed by the harder ones.
arXiv Detail & Related papers (2021-04-01T23:41:41Z) - Semi-supervised Domain Adaptation based on Dual-level Domain Mixing for Semantic Segmentation [34.790169990156684]
We focus on a more practical setting of semi-supervised domain adaptation (SSDA) where both a small set of labeled target data and large amounts of labeled source data are available.
Two kinds of data mixing methods are proposed to reduce the domain gap at the region level and the sample level, respectively.
We can obtain two complementary domain-mixed teachers based on dual-level mixed data from holistic and partial views respectively.
arXiv Detail & Related papers (2021-03-08T12:33:17Z) - Alleviating Semantic-level Shift: A Semi-supervised Domain Adaptation Method for Semantic Segmentation [97.8552697905657]
A key challenge of this task is how to alleviate the data distribution discrepancy between the source and target domains.
We propose Alleviating Semantic-level Shift (ASS), which can successfully promote the distribution consistency from both global and local views.
We apply our ASS to two domain adaptation tasks, from GTA5 to Cityscapes and from Synthia to Cityscapes.
arXiv Detail & Related papers (2020-04-02T03:25:05Z) - GradMix: Multi-source Transfer across Domains and Tasks [33.98368732653684]
GradMix is a model-agnostic method applicable to any model trained with a gradient-based learning rule.
We conduct MS-DTT experiments on two tasks: digit recognition and action recognition.
arXiv Detail & Related papers (2020-02-09T02:10:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.