Night-time Scene Parsing with a Large Real Dataset
- URL: http://arxiv.org/abs/2003.06883v3
- Date: Fri, 1 Apr 2022 05:40:06 GMT
- Title: Night-time Scene Parsing with a Large Real Dataset
- Authors: Xin Tan and Ke Xu and Ying Cao and Yiheng Zhang and Lizhuang Ma and
Rynson W.H. Lau
- Abstract summary: We aim to address the night-time scene parsing (NTSP) problem, which has two main challenges.
To tackle the scarcity of night-time data, we collect a new labeled dataset, named NightCity, of 4,297 real night-time images.
We also propose an exposure-aware framework to address the NTSP problem through augmenting the segmentation process with explicitly learned exposure features.
- Score: 67.11211537439152
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although huge progress has been made on scene analysis in recent years, most
existing works assume the input images to be in day-time with good lighting
conditions. In this work, we aim to address the night-time scene parsing (NTSP)
problem, which has two main challenges: 1) labeled night-time data are scarce,
and 2) over- and under-exposures may co-occur in the input night-time images
and are not explicitly modeled in existing pipelines. To tackle the scarcity of
night-time data, we collect a novel labeled dataset, named {\it NightCity}, of
4,297 real night-time images with ground truth pixel-level semantic
annotations. To our knowledge, NightCity is the largest dataset for NTSP. In
addition, we also propose an exposure-aware framework to address the NTSP
problem through augmenting the segmentation process with explicitly learned
exposure features. Extensive experiments show that training on NightCity
significantly improves NTSP performance and that our exposure-aware model
outperforms state-of-the-art methods, yielding top performance on our dataset
as well as on existing datasets.
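The exposure-aware idea above can be sketched minimally: derive per-pixel over- and under-exposure cues from the input and hand them to the segmenter as extra channels. The thresholds, function names, and NumPy pipeline below are illustrative assumptions, not the authors' implementation; the paper learns exposure features with a network rather than fixed luminance thresholds.

```python
import numpy as np

def exposure_features(image, under_thresh=0.15, over_thresh=0.85):
    """Derive crude per-pixel exposure cues from an RGB image in [0, 1].

    Returns binary masks marking under- and over-exposed pixels. This is a
    hypothetical stand-in for the paper's learned exposure features; the
    threshold values are arbitrary illustrative choices.
    """
    luminance = image.mean(axis=-1)  # simple luminance proxy
    under = (luminance < under_thresh).astype(np.float32)
    over = (luminance > over_thresh).astype(np.float32)
    return under, over

def augment_input(image):
    """Stack exposure cues as extra channels for a segmentation backbone."""
    under, over = exposure_features(image)
    return np.concatenate([image, under[..., None], over[..., None]], axis=-1)

# A dark night-time scene: mostly under-exposed, with one bright light source.
img = np.full((4, 4, 3), 0.05, dtype=np.float32)
img[0, 0] = 1.0  # simulated street light (over-exposed pixel)
aug = augment_input(img)
print(aug.shape)  # → (4, 4, 5)
```

The co-occurrence of both exposure extremes in one image, which the abstract highlights, shows up here as both cue channels being non-zero at different locations.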
Related papers
- PIG: Prompt Images Guidance for Night-Time Scene Parsing [48.35991796324741]
Unsupervised domain adaptation (UDA) has become the predominant method for studying night scenes.
We propose a Night-Focused Network (NFNet) to learn night-specific features from both target domain images and prompt images.
We conduct experiments on four night-time datasets: NightCity, NightCity+, Dark Zurich, and ACDC.
arXiv Detail & Related papers (2024-06-15T07:06:19Z)
- NocPlace: Nocturnal Visual Place Recognition via Generative and Inherited Knowledge Transfer [11.203135595002978]
NocPlace embeds resilience against dazzling lights and extreme darkness in the global descriptor.
NocPlace improves the performance of Eigenplaces by 7.6% on Tokyo 24/7 Night and 16.8% on SVOX Night.
arXiv Detail & Related papers (2024-02-27T02:47:09Z)
- Similarity Min-Max: Zero-Shot Day-Night Domain Adaptation [52.923298434948606]
Low-light conditions not only hamper human visual experience but also degrade the model's performance on downstream vision tasks.
This paper challenges a more complicated scenario with broader applicability, i.e., zero-shot day-night domain adaptation.
We propose a similarity min-max paradigm that considers them under a unified framework.
arXiv Detail & Related papers (2023-07-17T18:50:15Z)
- Disentangled Contrastive Image Translation for Nighttime Surveillance [87.03178320662592]
Nighttime surveillance suffers from degradation due to poor illumination and arduous human annotations.
Existing methods rely on multi-spectral images to perceive objects in the dark, but these are hampered by low resolution and the absence of color.
We argue that the ultimate solution for nighttime surveillance is night-to-day translation, or Night2Day.
This paper contributes a new surveillance dataset called NightSuR. It includes six scenes to support the study of nighttime surveillance.
arXiv Detail & Related papers (2023-07-11T06:40:27Z)
- STEPS: Joint Self-supervised Nighttime Image Enhancement and Depth Estimation [12.392842482031558]
We propose a method that jointly learns a nighttime image enhancer and a depth estimator, without using ground truth for either task.
Our method tightly entangles two self-supervised tasks using a newly proposed uncertain pixel masking strategy.
We benchmark the method on two established datasets: nuScenes and RobotCar.
arXiv Detail & Related papers (2023-02-02T18:59:47Z)
- Boosting Night-time Scene Parsing with Learnable Frequency [53.05778451012621]
Night-Time Scene Parsing (NTSP) is essential to many vision applications, especially for autonomous driving.
Most of the existing methods are proposed for day-time scene parsing.
We show that our method performs favorably against the state-of-the-art methods on the NightCity, NightCity+ and BDD100K-night datasets.
arXiv Detail & Related papers (2022-08-30T13:09:59Z)
- Day-to-Night Image Synthesis for Training Nighttime Neural ISPs [39.37467397777888]
We propose a method that synthesizes nighttime images from daytime images.
Daytime images are easy to capture, exhibit low noise, and rarely suffer from motion blur.
We show the effectiveness of our synthesis framework by training neural ISPs for nightmode rendering.
arXiv Detail & Related papers (2022-06-06T16:15:45Z)
- Rendering Nighttime Image Via Cascaded Color and Brightness Compensation [22.633061635144887]
We build a high-resolution nighttime RAW-RGB dataset with white balance and tone mapping annotated by experts.
We then develop the CBUnet, a two-stage NN ISP to cascade the compensation of color and brightness attributes.
Experiments show that our method achieves better visual quality than the traditional ISP pipeline.
arXiv Detail & Related papers (2022-04-19T16:15:31Z)
- NightLab: A Dual-level Architecture with Hardness Detection for Segmentation at Night [6.666707251631694]
We propose NightLab, a novel nighttime segmentation framework.
It contains models at two levels of granularity, i.e., image and regional; each level is composed of light adaptation and segmentation modules.
Experiments on the NightCity and BDD100K datasets show NightLab achieves State-of-The-Art (SoTA) performance compared to concurrent methods.
arXiv Detail & Related papers (2022-04-12T05:50:22Z)
- Map-Guided Curriculum Domain Adaptation and Uncertainty-Aware Evaluation for Semantic Nighttime Image Segmentation [107.33492779588641]
We develop a curriculum framework to adapt semantic segmentation models from day to night without using nighttime annotations.
We also design a new evaluation framework to address the substantial uncertainty of semantics in nighttime images.
arXiv Detail & Related papers (2020-05-28T16:54:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.