DANNet: A One-Stage Domain Adaptation Network for Unsupervised Nighttime
Semantic Segmentation
- URL: http://arxiv.org/abs/2104.10834v1
- Date: Thu, 22 Apr 2021 02:49:28 GMT
- Title: DANNet: A One-Stage Domain Adaptation Network for Unsupervised Nighttime
Semantic Segmentation
- Authors: Xinyi Wu, Zhenyao Wu, Hao Guo, Lili Ju, Song Wang
- Abstract summary: We propose a novel domain adaptation network (DANNet) for nighttime semantic segmentation.
It employs adversarial training with a labeled daytime dataset and an unlabeled dataset that contains coarsely aligned day-night image pairs.
Our method achieves state-of-the-art performance for nighttime semantic segmentation.
- Score: 18.43890050736093
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Semantic segmentation of nighttime images plays an equally important role as
that of daytime images in autonomous driving, but the former is much more
challenging due to poor illumination and arduous human annotation. In this
paper, we propose a novel domain adaptation network (DANNet) for nighttime
semantic segmentation without using labeled nighttime image data. It employs
adversarial training with a labeled daytime dataset and an unlabeled dataset
that contains coarsely aligned day-night image pairs. Specifically, for the
unlabeled day-night image pairs, we use the pixel-level predictions of static
object categories on a daytime image as a pseudo supervision to segment its
counterpart nighttime image. We further design a re-weighting strategy to
handle the inaccuracy caused by misalignment between day-night image pairs and
wrong predictions of daytime images, as well as boost the prediction accuracy
of small objects. The proposed DANNet is the first one-stage adaptation
framework for nighttime semantic segmentation, which does not train additional
day-night image transfer models as a separate pre-processing stage. Extensive
experiments on Dark Zurich and Nighttime Driving datasets show that our method
achieves state-of-the-art performance for nighttime semantic segmentation.
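The pseudo-supervision and re-weighting ideas from the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the static-class ids, the log-frequency weighting, and the exact loss form are assumptions chosen for clarity. The daytime prediction is converted to pseudo labels, restricted to static categories (dynamic objects may have moved between the two captures), and rare classes are up-weighted so small objects are not drowned out.

```python
import numpy as np

def pseudo_label_loss(day_probs, night_probs, static_classes, ignore_index=255):
    """Weighted pseudo-supervision loss for a coarsely aligned day-night pair.

    day_probs, night_probs: (H, W, C) softmax outputs of a segmentation
    network on the daytime and nighttime images, respectively.
    static_classes: set of class ids treated as static (e.g. road, sky).
    """
    c = day_probs.shape[-1]
    pseudo = day_probs.argmax(axis=-1)                 # daytime argmax -> pseudo labels
    mask = np.isin(pseudo, sorted(static_classes))     # keep static categories only
    pseudo = np.where(mask, pseudo, ignore_index)
    if not mask.any():
        return 0.0

    # Re-weighting: inverse log-frequency down-weights common classes,
    # boosting the contribution of small / rare objects.
    freq = np.bincount(pseudo[mask], minlength=c)
    weights = np.zeros(c)
    nz = freq > 0
    weights[nz] = 1.0 / np.log1p(freq[nz])

    # Weighted cross-entropy of nighttime predictions against the pseudo labels.
    p = night_probs[mask, pseudo[mask]]                # nighttime prob of pseudo class
    w_pix = weights[pseudo[mask]]
    return float(-(w_pix * np.log(p + 1e-8)).mean())
```

In practice the same idea would be expressed with framework tensors and an `ignore_index`-aware loss; the sketch only shows how static-class masking and frequency-based re-weighting combine into one supervision signal.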
Related papers
- Exploring Reliable Matching with Phase Enhancement for Night-time Semantic Segmentation [58.180226179087086]
We propose a novel end-to-end optimized approach, named NightFormer, tailored for night-time semantic segmentation.
Specifically, we design a pixel-level texture enhancement module to acquire texture-aware features hierarchically with phase enhancement and amplified attention.
Our proposed method performs favorably against state-of-the-art night-time semantic segmentation methods.
arXiv Detail & Related papers (2024-08-25T13:59:31Z)
- PIG: Prompt Images Guidance for Night-Time Scene Parsing [48.35991796324741]
Unsupervised domain adaptation (UDA) has become the predominant method for studying night scenes.
We propose a Night-Focused Network (NFNet) to learn night-specific features from both target domain images and prompt images.
We conduct experiments on four night-time datasets: NightCity, NightCity+, Dark Zurich, and ACDC.
arXiv Detail & Related papers (2024-06-15T07:06:19Z)
- CMDA: Cross-Modality Domain Adaptation for Nighttime Semantic Segmentation [21.689985575213512]
We propose a novel unsupervised Cross-Modality Domain Adaptation (CMDA) framework to leverage multi-modality (Images and Events) information for nighttime semantic segmentation.
In CMDA, we design the Image Motion-Extractor to extract motion information and the Image Content-Extractor to extract content information from images.
We introduce the first image-event nighttime semantic segmentation dataset.
arXiv Detail & Related papers (2023-07-29T09:29:09Z)
- Similarity Min-Max: Zero-Shot Day-Night Domain Adaptation [52.923298434948606]
Low-light conditions not only hamper human visual experience but also degrade the model's performance on downstream vision tasks.
This paper challenges a more complicated scenario with broader applicability, i.e., zero-shot day-night domain adaptation.
We propose a similarity min-max paradigm that considers them under a unified framework.
arXiv Detail & Related papers (2023-07-17T18:50:15Z)
- Disentangled Contrastive Image Translation for Nighttime Surveillance [87.03178320662592]
Nighttime surveillance suffers from degradation due to poor illumination and arduous human annotation.
Existing methods rely on multi-spectral images to perceive objects in the dark, which suffer from low resolution and a lack of color.
We argue that the ultimate solution for nighttime surveillance is night-to-day translation, or Night2Day.
This paper contributes a new surveillance dataset called NightSuR. It includes six scenes to support the study on nighttime surveillance.
arXiv Detail & Related papers (2023-07-11T06:40:27Z)
- LoopDA: Constructing Self-loops to Adapt Nighttime Semantic Segmentation [5.961294477200831]
We propose LoopDA for domain adaptive nighttime semantic segmentation.
Our model outperforms prior methods on Dark Zurich and Nighttime Driving datasets for semantic segmentation.
arXiv Detail & Related papers (2022-11-21T21:46:05Z)
- GPS-GLASS: Learning Nighttime Semantic Segmentation Using Daytime Video and GPS data [15.430918080412518]
Nighttime semantic segmentation is especially challenging due to a lack of annotated nighttime images.
We propose a novel GPS-based training framework for nighttime semantic segmentation.
Experimental results demonstrate the effectiveness of the proposed method on several nighttime semantic segmentation datasets.
arXiv Detail & Related papers (2022-07-27T05:05:04Z)
- Cross-Domain Correlation Distillation for Unsupervised Domain Adaptation in Nighttime Semantic Segmentation [17.874336775904272]
We propose a novel domain adaptation framework via cross-domain correlation distillation, called CCDistill.
We extract the content and style knowledge contained in features and calculate the degree of inherent or illumination difference between two images.
Experiments on Dark Zurich and ACDC demonstrate that CCDistill achieves the state-of-the-art performance for nighttime semantic segmentation.
arXiv Detail & Related papers (2022-05-02T12:42:04Z)
- Bi-Mix: Bidirectional Mixing for Domain Adaptive Nighttime Semantic Segmentation [83.97914777313136]
In autonomous driving, learning a segmentation model that can adapt to various environmental conditions is crucial.
In this paper, we study the problem of Domain Adaptive Nighttime Semantic Segmentation (DANSS), which aims to learn a discriminative nighttime model.
We propose a novel Bi-Mix framework for DANSS, which can contribute to both image translation and segmentation adaptation processes.
arXiv Detail & Related papers (2021-11-19T17:39:47Z)
- Map-Guided Curriculum Domain Adaptation and Uncertainty-Aware Evaluation for Semantic Nighttime Image Segmentation [107.33492779588641]
We develop a curriculum framework to adapt semantic segmentation models from day to night without using nighttime annotations.
We also design a new evaluation framework to address the substantial uncertainty of semantics in nighttime images.
arXiv Detail & Related papers (2020-05-28T16:54:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.