Improving Panoptic Segmentation for Nighttime or Low-Illumination Urban
Driving Scenes
- URL: http://arxiv.org/abs/2306.13725v1
- Date: Fri, 23 Jun 2023 18:14:26 GMT
- Title: Improving Panoptic Segmentation for Nighttime or Low-Illumination Urban
Driving Scenes
- Authors: Ankur Chrungoo
- Abstract summary: We propose two new methods to improve the performance and robustness of Panoptic segmentation.
One of the main factors for poor results is the lack of sufficient and accurately annotated nighttime images for urban driving scenes.
The proposed approach uses CycleGAN to translate daytime images with existing panoptic annotations into nighttime images.
The translated images are then used to retrain a panoptic segmentation model, improving performance and robustness under poor illumination and nighttime conditions.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Autonomous vehicles and driving systems use scene parsing as an essential
tool to understand the surrounding environment. Panoptic segmentation is a
state-of-the-art technique which proves to be pivotal in this use case. Deep
learning-based architectures have been utilized for effective and efficient
Panoptic Segmentation in recent times. However, when it comes to adverse
conditions like dark scenes with poor illumination or nighttime images,
existing methods perform poorly in comparison to daytime images. One of the
main factors for poor results is the lack of sufficient and accurately
annotated nighttime images for urban driving scenes. In this work, we propose
two new methods, first to improve the performance, and second to improve the
robustness of panoptic segmentation in nighttime or poor illumination urban
driving scenes using a domain translation approach. The proposed approach makes
use of CycleGAN (Zhu et al., 2017) to translate daytime images with existing
panoptic annotations into nighttime images, which are then utilized to retrain
a Panoptic segmentation model to improve performance and robustness under poor
illumination and nighttime conditions. In our experiments, Approach-1
demonstrates a significant improvement in the Panoptic segmentation performance
on the converted Cityscapes dataset with more than +10% PQ, +12% RQ, +2% SQ,
+14% mIoU and +10% AP50 absolute gain. Approach-2 demonstrates improved
robustness to varied nighttime driving environments. Both the approaches are
supported via comprehensive quantitative and qualitative analysis.
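As a toy illustration of the data pipeline the abstract describes (not the paper's actual code), the sketch below builds a synthetic nighttime training set from daytime images while reusing their panoptic annotations. A simple gamma-darkening transform stands in for the trained CycleGAN day-to-night generator; in the paper's approach, a CycleGAN model trained on unpaired day/night images would produce the translated frames.

```python
import numpy as np

def fake_day_to_night(img: np.ndarray, gamma: float = 2.5) -> np.ndarray:
    """Toy stand-in for a trained CycleGAN day-to-night generator.

    Gamma-darkens an RGB uint8 image; the real pipeline would instead
    run the image through a generator trained on unpaired day/night sets.
    """
    norm = img.astype(np.float32) / 255.0
    return (np.power(norm, gamma) * 255.0).astype(np.uint8)

def build_night_training_set(day_images, panoptic_labels):
    """Translate daytime images to synthetic night and keep their labels.

    Because domain translation preserves scene layout, the existing
    panoptic annotations remain valid for the translated images, yielding
    an annotated nighttime set for retraining without manual labelling.
    """
    night_images = [fake_day_to_night(im) for im in day_images]
    # Combine original and translated samples so the retrained model
    # keeps daytime accuracy while gaining nighttime robustness.
    images = day_images + night_images
    labels = panoptic_labels + panoptic_labels
    return images, labels

# Usage with dummy data: two 4x4 RGB "daytime" frames and label maps.
days = [np.full((4, 4, 3), 200, dtype=np.uint8) for _ in range(2)]
labs = [np.zeros((4, 4), dtype=np.int32) for _ in range(2)]
imgs, lbls = build_night_training_set(days, labs)
assert len(imgs) == 4 and len(lbls) == 4
assert imgs[2].mean() < imgs[0].mean()  # translated frames are darker
```

The key design point is label reuse: since the translation changes illumination but not scene geometry, each daytime annotation carries over to its nighttime counterpart unchanged.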
Related papers
- Exploring Reliable Matching with Phase Enhancement for Night-time Semantic Segmentation [58.180226179087086]
We propose a novel end-to-end optimized approach, named NightFormer, tailored for night-time semantic segmentation.
Specifically, we design a pixel-level texture enhancement module to acquire texture-aware features hierarchically with phase enhancement and amplified attention.
Our proposed method performs favorably against state-of-the-art night-time semantic segmentation methods.
arXiv Detail & Related papers (2024-08-25T13:59:31Z)
- RHRSegNet: Relighting High-Resolution Night-Time Semantic Segmentation [0.0]
Night-time semantic segmentation is a crucial task in computer vision, focusing on accurately classifying and segmenting objects in low-light conditions.
We propose RHRSegNet, implementing a relighting model over a High-Resolution Network for semantic segmentation.
Our proposed model increases HRNet segmentation performance by 5% in low-light or nighttime images.
arXiv Detail & Related papers (2024-07-08T15:07:09Z)
- NiteDR: Nighttime Image De-Raining with Cross-View Sensor Cooperative Learning for Dynamic Driving Scenes [49.92839157944134]
In nighttime driving scenes, insufficient and uneven lighting shrouds the scene in darkness, resulting in degraded image quality and visibility.
We develop an image de-raining framework tailored for rainy nighttime driving scenes.
It aims to remove rain artifacts, enrich scene representation, and restore useful information.
arXiv Detail & Related papers (2024-02-28T09:02:33Z)
- Similarity Min-Max: Zero-Shot Day-Night Domain Adaptation [52.923298434948606]
Low-light conditions not only hamper human visual experience but also degrade the model's performance on downstream vision tasks.
This paper tackles a more complicated scenario with broader applicability, i.e., zero-shot day-night domain adaptation.
We propose a similarity min-max paradigm that considers them under a unified framework.
arXiv Detail & Related papers (2023-07-17T18:50:15Z)
- STEPS: Joint Self-supervised Nighttime Image Enhancement and Depth Estimation [12.392842482031558]
We propose a method that jointly learns a nighttime image enhancer and a depth estimator, without using ground truth for either task.
Our method tightly entangles two self-supervised tasks using a newly proposed uncertain pixel masking strategy.
We benchmark the method on two established datasets: nuScenes and RobotCar.
arXiv Detail & Related papers (2023-02-02T18:59:47Z)
- Boosting Night-time Scene Parsing with Learnable Frequency [53.05778451012621]
Night-Time Scene Parsing (NTSP) is essential to many vision applications, especially for autonomous driving.
Most of the existing methods are proposed for day-time scene parsing.
We show that our method performs favorably against the state-of-the-art methods on the NightCity, NightCity+ and BDD100K-night datasets.
arXiv Detail & Related papers (2022-08-30T13:09:59Z)
- NightLab: A Dual-level Architecture with Hardness Detection for Segmentation at Night [6.666707251631694]
We propose NightLab, a novel nighttime segmentation framework.
It comprises models at two levels of granularity, i.e. image and regional, and each level is composed of light adaptation and segmentation modules.
Experiments on the NightCity and BDD100K datasets show NightLab achieves State-of-The-Art (SoTA) performance compared to concurrent methods.
arXiv Detail & Related papers (2022-04-12T05:50:22Z) - Bi-Mix: Bidirectional Mixing for Domain Adaptive Nighttime Semantic
Segmentation [83.97914777313136]
In autonomous driving, learning a segmentation model that can adapt to various environmental conditions is crucial.
In this paper, we study the problem of Domain Adaptive Nighttime Semantic Segmentation (DANSS), which aims to learn a discriminative nighttime model.
We propose a novel Bi-Mix framework for DANSS, which can contribute to both image translation and segmentation adaptation processes.
arXiv Detail & Related papers (2021-11-19T17:39:47Z) - Regularizing Nighttime Weirdness: Efficient Self-supervised Monocular
Depth Estimation in the Dark [20.66405067066299]
We introduce Priors-Based Regularization to learn distribution knowledge from unpaired depth maps.
We also leverage Mapping-Consistent Image Enhancement module to enhance image visibility and contrast.
Our framework achieves remarkable improvements and state-of-the-art results on two nighttime datasets.
arXiv Detail & Related papers (2021-08-09T06:24:35Z) - Night-time Scene Parsing with a Large Real Dataset [67.11211537439152]
We aim to address the night-time scene parsing (NTSP) problem, which has two main challenges.
To tackle the scarcity of night-time data, we collect a novel labeled dataset, named NightCity, of 4,297 real night-time images.
We also propose an exposure-aware framework to address the NTSP problem through augmenting the segmentation process with explicitly learned exposure features.
arXiv Detail & Related papers (2020-03-15T18:11:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.