RobustNet: Improving Domain Generalization in Urban-Scene Segmentation
via Instance Selective Whitening
- URL: http://arxiv.org/abs/2103.15597v2
- Date: Wed, 31 Mar 2021 10:56:17 GMT
- Title: RobustNet: Improving Domain Generalization in Urban-Scene Segmentation
via Instance Selective Whitening
- Authors: Sungha Choi, Sanghun Jung, Huiwon Yun, Joanne Kim, Seungryong Kim and
Jaegul Choo
- Abstract summary: Enhancing generalization capability of deep neural networks to unseen domains is crucial for safety-critical applications in the real world such as autonomous driving.
This paper proposes a novel instance selective whitening loss to improve the robustness of the segmentation networks for unseen domains.
- Score: 40.98892593362837
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Enhancing the generalization capability of deep neural networks to unseen
domains is crucial for safety-critical applications in the real world such as
autonomous driving. To address this issue, this paper proposes a novel instance
selective whitening loss to improve the robustness of the segmentation networks
for unseen domains. Our approach disentangles the domain-specific style and
domain-invariant content encoded in higher-order statistics (i.e., feature
covariance) of the feature representations and selectively removes only the
style information causing domain shift. As shown in Fig. 1, our method provides
reasonable predictions for images with (a) low illumination, (b) rain, and (c)
unseen structures. Such images are not included in the training dataset; on them,
the baseline shows a significant performance drop, whereas our method does not.
Being simple yet effective, our approach improves the robustness of various
backbone networks without additional computational cost. We conduct extensive
experiments in urban-scene segmentation and show the superiority of our
approach to existing work. Our code is available at
https://github.com/shachoi/RobustNet.
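For illustration, below is a minimal PyTorch sketch of a selective whitening penalty applied to the channel covariance of a feature map. It is not the authors' implementation (see the linked repository for that): the function name, the precomputed selection mask, and the mean-only standardization are simplifying assumptions, whereas the paper derives the mask from how covariance entries vary under photometric transforms.

```python
import torch

def selective_whitening_loss(feat: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Penalize selected ("style") entries of the instance-wise channel covariance.

    feat: (B, C, H, W) feature map from an early backbone layer.
    mask: (C, C) binary matrix marking the covariance entries to suppress
          (assumed given here; RobustNet infers it from the sensitivity of the
          covariance to photometric augmentations).
    """
    b, c, h, w = feat.shape
    x = feat.view(b, c, h * w)
    x = x - x.mean(dim=-1, keepdim=True)                  # instance-wise centering
    cov = torch.bmm(x, x.transpose(1, 2)) / (h * w - 1)   # (B, C, C) covariance
    selected = cov * mask.unsqueeze(0)                    # keep only masked entries
    return selected.abs().sum(dim=(1, 2)).mean() / mask.sum().clamp(min=1.0)


# Toy usage with random tensors; in practice the penalty is added to the task loss.
feat = torch.randn(4, 64, 32, 32)
mask = torch.triu((torch.rand(64, 64) > 0.9).float(), diagonal=1)  # off-diagonal only
loss = selective_whitening_loss(feat, mask)
```

The key design point conveyed by the abstract is that only a subset of covariance entries is driven toward zero, so style correlations are removed while content-related statistics are left intact.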
Related papers
- StyDeSty: Min-Max Stylization and Destylization for Single Domain Generalization [85.18995948334592]
Single domain generalization (single DG) aims at learning a robust model generalizable to unseen domains from only one training domain.
State-of-the-art approaches have mostly relied on data augmentations, such as adversarial perturbation and style enhancement, to synthesize new data.
We propose StyDeSty, which explicitly accounts for the alignment of the source and pseudo domains in the process of data augmentation.
arXiv Detail & Related papers (2024-06-01T02:41:34Z)
- Intra- & Extra-Source Exemplar-Based Style Synthesis for Improved Domain Generalization [21.591831983223997]
We propose an exemplar-based style synthesis pipeline to improve domain generalization in semantic segmentation.
Our method is based on a novel masked noise encoder for StyleGAN2 inversion.
We achieve up to 12.4% mIoU improvement on driving-scene semantic segmentation under different types of data shifts.
arXiv Detail & Related papers (2023-07-02T19:56:43Z)
- Single Domain Dynamic Generalization for Iris Presentation Attack Detection [41.126916126040655]
Iris presentation attack detection has achieved great success under intra-domain settings but easily degrades on unseen domains.
We propose a Single Domain Dynamic Generalization (SDDG) framework, which exploits domain-invariant and domain-specific features on a per-sample basis.
The proposed method is effective and outperforms the state of the art on the LivDet-Iris 2017 dataset.
arXiv Detail & Related papers (2023-05-22T07:54:13Z)
- Adversarial Style Augmentation for Domain Generalization [41.72506801753435]
We introduce a novel Adversarial Style Augmentation (ASA) method, which explores broader style spaces by generating more effective statistics perturbations.
To facilitate the application of ASA, we design a simple yet effective module, namely AdvStyle, which instantiates the ASA method in a plug-and-play manner; a generic sketch of this kind of statistics perturbation appears after the related-papers list.
Our method significantly outperforms its competitors on the PACS dataset under the single source generalization setting.
arXiv Detail & Related papers (2023-01-30T03:52:16Z)
- Self-Training Guided Disentangled Adaptation for Cross-Domain Remote Sensing Image Semantic Segmentation [20.07907723950031]
We propose a self-training guided disentangled adaptation network (ST-DASegNet) for cross-domain RS image semantic segmentation task.
We first propose a source student backbone and a target student backbone to extract source-style and target-style features, respectively, for both source and target images.
We then propose a domain disentangled module to extract the universal feature and purify the distinct feature of source-style and target-style features.
arXiv Detail & Related papers (2023-01-13T13:11:22Z)
- Domain Adaptive Semantic Segmentation without Source Data [50.18389578589789]
We investigate domain adaptive semantic segmentation without source data, which assumes that the model is pre-trained on the source domain.
We propose an effective framework for this challenging problem with two components: positive learning and negative learning.
Our framework can be easily implemented and incorporated with other methods to further enhance the performance.
arXiv Detail & Related papers (2021-10-13T04:12:27Z)
- Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with a wide range of potential applications.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z)
- Spatial Attention Pyramid Network for Unsupervised Domain Adaptation [66.75008386980869]
Unsupervised domain adaptation is critical in various computer vision tasks.
We design a new spatial attention pyramid network for unsupervised domain adaptation.
Our method performs favorably against the state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2020-03-29T09:03:23Z)
- Supervised Domain Adaptation using Graph Embedding [86.3361797111839]
Domain adaptation methods assume that distributions between the two domains are shifted and attempt to realign them.
We propose a generic framework based on graph embedding.
We show that the proposed approach leads to a powerful Domain Adaptation framework.
arXiv Detail & Related papers (2020-03-09T12:25:13Z)
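As referenced in the Adversarial Style Augmentation entry above, a common building block of such style-augmentation methods is perturbing the per-channel feature statistics. The sketch below is a generic, randomly perturbed variant for illustration only, not the ASA/AdvStyle algorithm from that paper; the function name and the noise scale eps_scale are made-up assumptions.

```python
import torch

def perturb_style_stats(feat: torch.Tensor, eps_scale: float = 0.1) -> torch.Tensor:
    """Generic feature-statistics ("style") perturbation for augmentation.

    Normalizes each instance's channels, then re-scales them with randomly
    perturbed mean/std; adversarial variants instead optimize the perturbation
    to maximize the task loss.
    """
    b, c, h, w = feat.shape
    x = feat.view(b, c, -1)
    mu = x.mean(dim=-1, keepdim=True)
    sigma = x.std(dim=-1, keepdim=True) + 1e-6
    normalized = (x - mu) / sigma
    new_mu = mu + eps_scale * torch.randn_like(mu)
    new_sigma = sigma * (1.0 + eps_scale * torch.randn_like(sigma))
    return (normalized * new_sigma + new_mu).view(b, c, h, w)
```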