NightLab: A Dual-level Architecture with Hardness Detection for
Segmentation at Night
- URL: http://arxiv.org/abs/2204.05538v1
- Date: Tue, 12 Apr 2022 05:50:22 GMT
- Title: NightLab: A Dual-level Architecture with Hardness Detection for
Segmentation at Night
- Authors: Xueqing Deng, Peng Wang, Xiaochen Lian, Shawn Newsam
- Abstract summary: We propose NightLab, a novel nighttime segmentation framework.
NightLab contains models at two levels of granularity, i.e., image and regional, and each level is composed of light adaptation and segmentation modules.
Experiments on the NightCity and BDD100K datasets show NightLab achieves State-of-The-Art (SoTA) performance compared to concurrent methods.
- Score: 6.666707251631694
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The semantic segmentation of nighttime scenes is a challenging problem that
is key to impactful applications like self-driving cars. Yet, it has received
little attention compared to its daytime counterpart. In this paper, we propose
NightLab, a novel nighttime segmentation framework that leverages multiple deep
learning models imbued with night-aware features to yield State-of-The-Art
(SoTA) performance on multiple night segmentation benchmarks. Notably, NightLab
contains models at two levels of granularity, i.e. image and regional, and each
level is composed of light adaptation and segmentation modules. Given a
nighttime image, the image level model provides an initial segmentation
estimate while, in parallel, a hardness detection module identifies regions and
their surrounding context that need further analysis. A regional level model
focuses on these difficult regions to provide a significantly improved
segmentation. All the models in NightLab are trained end-to-end using a set of
proposed night-aware losses without handcrafted heuristics. Extensive
experiments on the NightCity and BDD100K datasets show NightLab achieves SoTA
performance compared to concurrent methods.
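The dual-level design described above maps onto a simple two-pass inference flow: a global model segments the whole image, a hardness detector proposes difficult regions, and a regional model re-segments those crops before they are pasted back. The PyTorch-style sketch below illustrates only this control flow; the module interfaces, the hardness threshold, and the crop-and-paste scheme are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of NightLab's dual-level inference flow (illustrative only).
# image_model, region_model, and hardness_detector stand in for the paper's
# light-adaptation + segmentation modules; their internals are assumptions.
import torch
import torch.nn as nn


class NightLabSketch(nn.Module):
    def __init__(self, image_model: nn.Module, region_model: nn.Module,
                 hardness_detector: nn.Module, hardness_threshold: float = 0.5):
        super().__init__()
        self.image_model = image_model              # image-level adaptation + segmentation
        self.region_model = region_model            # regional-level adaptation + segmentation
        self.hardness_detector = hardness_detector  # proposes hard regions (boxes + scores)
        self.hardness_threshold = hardness_threshold

    @torch.no_grad()
    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # 1) Global pass: initial per-pixel class logits for the whole image.
        logits = self.image_model(image)                     # (B, C, H, W)

        # 2) Hardness detection: boxes (x1, y1, x2, y2) with difficulty scores.
        boxes, scores = self.hardness_detector(image)

        # 3) Regional pass: re-segment each sufficiently hard region and paste
        #    the refined logits back into the global estimate.
        for (x1, y1, x2, y2), score in zip(boxes, scores):
            if score < self.hardness_threshold:
                continue
            crop = image[..., y1:y2, x1:x2]
            refined = self.region_model(crop)                # (B, C, y2-y1, x2-x1)
            logits[..., y1:y2, x1:x2] = refined
        return logits.argmax(dim=1)                          # (B, H, W) label map
```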
Related papers
- Exploring Reliable Matching with Phase Enhancement for Night-time Semantic Segmentation [58.180226179087086]
We propose a novel end-to-end optimized approach, named NightFormer, tailored for night-time semantic segmentation.
Specifically, we design a pixel-level texture enhancement module to acquire texture-aware features hierarchically with phase enhancement and amplified attention.
Our proposed method performs favorably against state-of-the-art night-time semantic segmentation methods.
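As a rough illustration of phase enhancement, the sketch below damps amplitude variation in a feature map's Fourier spectrum so that the phase, which carries structural and texture information, dominates the reconstruction. The blending weight `alpha` and this particular formulation are assumptions; NightFormer's actual module may differ.

```python
# Illustrative sketch of phase-based texture enhancement with torch.fft.
import torch


def phase_enhance(feat: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    """Blend each feature map's amplitude toward its mean, keeping phase.

    feat: (B, C, H, W) feature maps. Phase encodes structure and texture,
    which survive low light better than amplitude, so we damp amplitude
    variation and let phase dominate the reconstruction.
    """
    spec = torch.fft.rfft2(feat, norm="ortho")        # complex spectrum
    amp, phase = spec.abs(), spec.angle()
    # Move amplitude toward its spatial mean so phase dominates.
    amp = (1 - alpha) * amp + alpha * amp.mean(dim=(-2, -1), keepdim=True)
    enhanced = torch.polar(amp, phase)                # rebuild complex spectrum
    return torch.fft.irfft2(enhanced, s=feat.shape[-2:], norm="ortho")
```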
arXiv Detail & Related papers (2024-08-25T13:59:31Z)
- RHRSegNet: Relighting High-Resolution Night-Time Semantic Segmentation [0.0]
Night time semantic segmentation is a crucial task in computer vision, focusing on accurately classifying and segmenting objects in low-light conditions.
We propose RHRSegNet, implementing a relighting model over a High-Resolution Network for semantic segmentation.
Our proposed model improves HRNet's segmentation performance by 5% on low-light and nighttime images.
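A minimal sketch of the relight-then-segment idea, assuming a small residual relighting CNN placed in front of an HRNet-style segmenter; the relighter's architecture here is a placeholder, not RHRSegNet's actual design.

```python
# Sketch of a relighting front-end composed with a segmentation network.
import torch
import torch.nn as nn


class RelightThenSegment(nn.Module):
    def __init__(self, segmenter: nn.Module):
        super().__init__()
        # Small residual relighter: predicts a brightness correction.
        self.relight = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, 3, padding=1),
        )
        self.segmenter = segmenter  # e.g., an HRNet-style segmentation model

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        relit = torch.clamp(x + self.relight(x), 0.0, 1.0)  # residual relighting
        return self.segmenter(relit)
```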
arXiv Detail & Related papers (2024-07-08T15:07:09Z)
- Similarity Min-Max: Zero-Shot Day-Night Domain Adaptation [52.923298434948606]
Low-light conditions not only hamper human visual experience but also degrade the model's performance on downstream vision tasks.
This paper tackles a more complicated scenario with broader applicability, i.e., zero-shot day-night domain adaptation.
We propose a similarity min-max paradigm that considers image-level and model-level adaptation under a unified framework.
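One way to read the similarity min-max idea is as an alternating game: a darkening module is optimized to minimize feature similarity between day images and their synthetically darkened versions (generating hard night-like inputs), while the feature extractor is optimized to maximize that similarity. The sketch below implements this alternation with assumed components (`darkener`, `encoder`, cosine similarity); the paper's exact objectives may differ.

```python
# Sketch of one similarity min-max training step (assumed formulation).
import torch
import torch.nn.functional as F


def cosine_sim(f1: torch.Tensor, f2: torch.Tensor) -> torch.Tensor:
    return F.cosine_similarity(f1.flatten(1), f2.flatten(1)).mean()


def minmax_step(day, darkener, encoder, opt_dark, opt_enc):
    # Min step: darkener produces the hardest (least similar) night-like image.
    dark = darkener(day)
    loss_dark = cosine_sim(encoder(day).detach(), encoder(dark))
    opt_dark.zero_grad()
    loss_dark.backward()   # stale encoder grads are cleared below
    opt_dark.step()

    # Max step: encoder aligns features of day and darkened views.
    dark = darkener(day).detach()
    loss_enc = -cosine_sim(encoder(day), encoder(dark))
    opt_enc.zero_grad()
    loss_enc.backward()
    opt_enc.step()
    return loss_dark.item(), loss_enc.item()
```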
arXiv Detail & Related papers (2023-07-17T18:50:15Z)
- Disentangled Contrastive Image Translation for Nighttime Surveillance [87.03178320662592]
Nighttime surveillance suffers from degradation due to poor illumination and arduous human annotations.
Existing methods rely on multi-spectral images to perceive objects in the dark, but these suffer from low resolution and the absence of color.
We argue that the ultimate solution for nighttime surveillance is night-to-day translation, or Night2Day.
This paper contributes a new surveillance dataset called NightSuR. It includes six scenes to support the study on nighttime surveillance.
arXiv Detail & Related papers (2023-07-11T06:40:27Z)
- Boosting Night-time Scene Parsing with Learnable Frequency [53.05778451012621]
Night-Time Scene Parsing (NTSP) is essential to many vision applications, especially for autonomous driving.
Most of the existing methods are proposed for day-time scene parsing.
We show that our method performs favorably against the state-of-the-art methods on the NightCity, NightCity+ and BDD100K-night datasets.
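A minimal sketch of a learnable frequency filter: features are transformed with an FFT, re-weighted by learnable per-frequency gains, and transformed back, letting the network emphasize frequency bands that remain informative at night. The placement and parameterization here are assumptions about the general technique, not the paper's exact module.

```python
# Sketch of a learnable frequency-domain filter over feature maps.
import torch
import torch.nn as nn


class LearnableFrequencyFilter(nn.Module):
    def __init__(self, channels: int, height: int, width: int):
        super().__init__()
        # One learnable gain per channel and rFFT frequency bin, init to identity.
        self.weight = nn.Parameter(torch.ones(channels, height, width // 2 + 1))

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        spec = torch.fft.rfft2(feat, norm="ortho")            # (B, C, H, W//2+1)
        spec = spec * self.weight                             # re-weight frequencies
        return torch.fft.irfft2(spec, s=feat.shape[-2:], norm="ortho")
```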
arXiv Detail & Related papers (2022-08-30T13:09:59Z)
- Bi-Mix: Bidirectional Mixing for Domain Adaptive Nighttime Semantic Segmentation [83.97914777313136]
In autonomous driving, learning a segmentation model that can adapt to various environmental conditions is crucial.
In this paper, we study the problem of Domain Adaptive Nighttime Semantic Segmentation (DANSS), which aims to learn a discriminative nighttime segmentation model.
We propose a novel Bi-Mix framework for DANSS, which can contribute to both image translation and segmentation adaptation processes.
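The bidirectional mixing idea can be sketched as pasting content across domains in both directions with a shared mask, as below. The box-shaped mask and the symmetric formulation are assumptions; Bi-Mix's actual mixing strategy and its handling of labels and pseudo-labels may differ.

```python
# Sketch of bidirectional day-night pixel mixing (assumed formulation).
import torch


def bi_mix(day: torch.Tensor, night: torch.Tensor, mask: torch.Tensor):
    """day, night: (B, 3, H, W); mask: (B, 1, H, W) binary.

    Returns (day pasted into night, night pasted into day).
    """
    day_in_night = mask * day + (1 - mask) * night
    night_in_day = mask * night + (1 - mask) * day
    return day_in_night, night_in_day


def random_box_mask(b: int, h: int, w: int) -> torch.Tensor:
    # Paste a half-size box at a random position (illustrative mask choice).
    mask = torch.zeros(b, 1, h, w)
    y = torch.randint(0, h // 2, (1,)).item()
    x = torch.randint(0, w // 2, (1,)).item()
    mask[..., y:y + h // 2, x:x + w // 2] = 1.0
    return mask
```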
arXiv Detail & Related papers (2021-11-19T17:39:47Z)
- DANNet: A One-Stage Domain Adaptation Network for Unsupervised Nighttime Semantic Segmentation [18.43890050736093]
We propose a novel domain adaptation network (DANNet) for nighttime semantic segmentation.
It employs adversarial training with a labeled daytime dataset and an unlabeled dataset that contains coarsely aligned day-night image pairs.
Our method achieves state-of-the-art performance for nighttime semantic segmentation.
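A generic sketch of the adversarial component: a discriminator learns to distinguish day from night segmentation outputs, and the segmenter is updated to make night outputs indistinguishable from day outputs. The losses and update schedule below are standard output-space adversarial adaptation, not necessarily DANNet's exact formulation.

```python
# Sketch of one output-space adversarial adaptation step (generic, assumed).
import torch
import torch.nn.functional as F


def adversarial_step(seg, disc, day_img, night_img, opt_seg, opt_disc):
    day_pred = torch.softmax(seg(day_img), dim=1)
    night_pred = torch.softmax(seg(night_img), dim=1)

    # Discriminator: push day outputs toward 1, night outputs toward 0.
    d_day = disc(day_pred.detach())
    d_night = disc(night_pred.detach())
    loss_d = (F.binary_cross_entropy_with_logits(d_day, torch.ones_like(d_day))
              + F.binary_cross_entropy_with_logits(d_night, torch.zeros_like(d_night)))
    opt_disc.zero_grad()
    loss_d.backward()
    opt_disc.step()

    # Segmenter: make night outputs look like day outputs to the discriminator.
    d_night = disc(night_pred)
    loss_adv = F.binary_cross_entropy_with_logits(d_night, torch.ones_like(d_night))
    opt_seg.zero_grad()
    loss_adv.backward()  # disc grads accrued here are cleared next iteration
    opt_seg.step()
    return loss_d.item(), loss_adv.item()
```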
arXiv Detail & Related papers (2021-04-22T02:49:28Z)
- Night-time Scene Parsing with a Large Real Dataset [67.11211537439152]
We aim to address the night-time scene parsing (NTSP) problem, which has two main challenges.
To tackle the scarcity of night-time data, we collect a novel labeled dataset, named NightCity, of 4,297 real night-time images.
We also propose an exposure-aware framework to address the NTSP problem through augmenting the segmentation process with explicitly learned exposure features.
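An exposure-aware segmenter can be sketched as a backbone plus an auxiliary branch that predicts a per-pixel exposure map used to gate the segmentation features, as below. The gating-by-multiplication fusion and the heads are assumptions about the general idea, not the paper's exact architecture.

```python
# Sketch of an exposure-aware segmentation head (assumed fusion scheme).
import torch
import torch.nn as nn


class ExposureAwareSeg(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone                        # image -> (B, feat_dim, h, w)
        self.exposure_head = nn.Conv2d(feat_dim, 1, 1)  # per-pixel exposure estimate
        self.modulate = nn.Conv2d(1, feat_dim, 1)       # exposure -> feature gates
        self.classifier = nn.Conv2d(feat_dim, num_classes, 1)

    def forward(self, x: torch.Tensor):
        feat = self.backbone(x)
        exposure = torch.sigmoid(self.exposure_head(feat))     # (B, 1, h, w)
        gated = feat * torch.sigmoid(self.modulate(exposure))  # exposure-gated features
        # Return logits plus the exposure map for auxiliary supervision.
        return self.classifier(gated), exposure
```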
arXiv Detail & Related papers (2020-03-15T18:11:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.