Improving Nighttime Driving-Scene Segmentation via Dual Image-adaptive
Learnable Filters
- URL: http://arxiv.org/abs/2207.01331v2
- Date: Mon, 20 Mar 2023 07:50:13 GMT
- Title: Improving Nighttime Driving-Scene Segmentation via Dual Image-adaptive
Learnable Filters
- Authors: Wenyu Liu, Wentong Li, Jianke Zhu, Miaomiao Cui, Xuansong Xie, Lei
Zhang
- Abstract summary: We present an add-on module called dual image-adaptive learnable filters (DIAL-Filters) to improve the semantic segmentation in nighttime driving conditions.
DIAL-Filters consist of two parts: an image-adaptive processing module (IAPM) and a learnable guided filter (LGF).
- Score: 27.960081476653023
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Semantic segmentation on driving-scene images is vital for autonomous
driving. Although encouraging performance has been achieved on daytime images,
the performance on nighttime images is less satisfactory due to insufficient
exposure and the lack of labeled data. To address these issues, we
present an add-on module called dual image-adaptive learnable filters
(DIAL-Filters) to improve the semantic segmentation in nighttime driving
conditions, aiming at exploiting the intrinsic features of driving-scene images
under different illuminations. DIAL-Filters consist of two parts: an
image-adaptive processing module (IAPM) and a learnable guided filter (LGF).
With DIAL-Filters, we design both unsupervised and supervised frameworks for
nighttime driving-scene segmentation, which can be trained in an end-to-end
manner. Specifically, the IAPM module consists of a small convolutional neural
network with a set of differentiable image filters, where each image can be
adaptively enhanced for better segmentation with respect to the different
illuminations. The LGF is employed to refine the output of the segmentation
network and obtain the final segmentation result. The DIAL-Filters are
lightweight and efficient, and they can be readily applied to both daytime and
nighttime images. Our experiments show that DIAL-Filters significantly improve
supervised segmentation performance on the ACDC_Night and NightCity datasets,
while achieving state-of-the-art performance on unsupervised nighttime
semantic segmentation on the Dark Zurich and Nighttime Driving testbeds.
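The abstract describes the pipeline only at a high level, so the PyTorch sketch below is a rough, hedged illustration rather than the paper's implementation: a small CNN predicts per-image parameters for a couple of differentiable filters (gamma and contrast are assumptions here), the enhanced image is fed to a segmentation backbone, and a guided-filter-style smoothing refines the logits against the enhanced image. All class names, filter choices, and hyperparameters are placeholders.

```python
# Minimal sketch of an image-adaptive enhancement + segmentation pipeline,
# loosely following the IAPM/LGF description in the abstract above. The chosen
# filters (gamma, contrast), the network sizes, and the box-filter-based guided
# smoothing are illustrative assumptions, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ParameterPredictor(nn.Module):
    """Small CNN that predicts per-image filter parameters from a thumbnail."""

    def __init__(self, num_params: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_params)

    def forward(self, x):
        thumb = F.interpolate(x, size=(64, 64), mode="bilinear", align_corners=False)
        return self.head(self.features(thumb).flatten(1))  # (B, num_params)


def apply_differentiable_filters(img, params):
    """Apply gamma and contrast adjustments; both stay differentiable in params."""
    gamma = 0.5 + 2.0 * torch.sigmoid(params[:, 0]).view(-1, 1, 1, 1)     # ~[0.5, 2.5]
    contrast = 0.5 + 1.5 * torch.sigmoid(params[:, 1]).view(-1, 1, 1, 1)  # ~[0.5, 2.0]
    img = img.clamp(1e-4, 1.0) ** gamma
    mean = img.mean(dim=(2, 3), keepdim=True)
    return ((img - mean) * contrast + mean).clamp(0.0, 1.0)


def guided_smoothing(logits, guide, radius=4, eps=1e-2):
    """Guided-filter-style, edge-aware smoothing of logits using the image as guide."""
    k = 2 * radius + 1

    def box(t):  # mean filter over a (2r+1) x (2r+1) window
        return F.avg_pool2d(t, k, stride=1, padding=radius, count_include_pad=False)

    guide = guide.mean(dim=1, keepdim=True)          # single-channel guide
    mean_g, mean_l = box(guide), box(logits)
    cov = box(guide * logits) - mean_g * mean_l
    var = box(guide * guide) - mean_g * mean_g
    a = cov / (var + eps)
    b = mean_l - a * mean_g
    return box(a) * guide + box(b)


class EnhanceThenSegment(nn.Module):
    """Predict enhancement parameters, enhance, segment, then refine the logits."""

    def __init__(self, segmentation_net: nn.Module):
        super().__init__()
        self.predictor = ParameterPredictor()
        self.segmentation_net = segmentation_net     # any dense-prediction backbone

    def forward(self, img):                          # img: (B, 3, H, W) in [0, 1]
        params = self.predictor(img)                 # per-image filter parameters
        enhanced = apply_differentiable_filters(img, params)
        logits = self.segmentation_net(enhanced)     # (B, num_classes, H, W)
        return guided_smoothing(logits, enhanced)    # refined logits
```

Any dense-prediction backbone could be dropped in as segmentation_net; the point of the sketch is only that both the enhancement parameters and the guided smoothing stay differentiable, which is what allows the kind of end-to-end training the abstract describes.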
Related papers
- Exploring Reliable Matching with Phase Enhancement for Night-time Semantic Segmentation [58.180226179087086]
We propose a novel end-to-end optimized approach, named NightFormer, tailored for night-time semantic segmentation.
Specifically, we design a pixel-level texture enhancement module to acquire texture-aware features hierarchically with phase enhancement and amplified attention.
Our proposed method performs favorably against state-of-the-art night-time semantic segmentation methods.
arXiv Detail & Related papers (2024-08-25T13:59:31Z) - RHRSegNet: Relighting High-Resolution Night-Time Semantic Segmentation [0.0]
Night-time semantic segmentation is a crucial task in computer vision, focusing on accurately classifying and segmenting objects in low-light conditions.
We propose RHRSegNet, implementing a relighting model over a High-Resolution Network for semantic segmentation.
Our proposed model increases HRNet segmentation performance by 5% on low-light or nighttime images.
arXiv Detail & Related papers (2024-07-08T15:07:09Z) - Skip-Attention: Improving Vision Transformers by Paying Less Attention [55.47058516775423]
Vision transformers (ViTs) use expensive self-attention operations in every layer.
We propose SkipAt, a method to reuse self-attention from preceding layers to approximate attention at one or more subsequent layers.
We show the effectiveness of our method in image classification and self-supervised learning on ImageNet-1K, semantic segmentation on ADE20K, image denoising on SIDD, and video denoising on DAVIS.
arXiv Detail & Related papers (2023-01-05T18:59:52Z) - Boosting Night-time Scene Parsing with Learnable Frequency [53.05778451012621]
Night-Time Scene Parsing (NTSP) is essential to many vision applications, especially for autonomous driving.
Most existing methods are designed for day-time scene parsing.
We show that our method performs favorably against the state-of-the-art methods on the NightCity, NightCity+ and BDD100K-night datasets.
arXiv Detail & Related papers (2022-08-30T13:09:59Z) - GPS-GLASS: Learning Nighttime Semantic Segmentation Using Daytime Video
and GPS data [15.430918080412518]
Nighttime semantic segmentation is especially challenging due to a lack of annotated nighttime images.
We propose a novel GPS-based training framework for nighttime semantic segmentation.
Experimental results demonstrate the effectiveness of the proposed method on several nighttime semantic segmentation datasets.
arXiv Detail & Related papers (2022-07-27T05:05:04Z) - Illumination Adaptive Transformer [66.50045722358503]
We propose a lightweight, fast Illumination Adaptive Transformer (IAT).
IAT decomposes the light transformation pipeline into local and global ISP components.
We have extensively evaluated IAT on multiple real-world datasets.
arXiv Detail & Related papers (2022-05-30T06:21:52Z) - NightLab: A Dual-level Architecture with Hardness Detection for
Segmentation at Night [6.666707251631694]
We propose NightLab, a novel nighttime segmentation framework.
It contains models at two levels of granularity, i.e., image and regional, and each level is composed of light adaptation and segmentation modules.
Experiments on the NightCity and BDD100K datasets show NightLab achieves State-of-The-Art (SoTA) performance compared to concurrent methods.
arXiv Detail & Related papers (2022-04-12T05:50:22Z) - Retinal Vessel Segmentation with Pixel-wise Adaptive Filters [47.8629995041574]
We propose two novel methods to address the challenges of retinal vessel segmentation.
First, we devise a light-weight module, named multi-scale residual similarity gathering (MRSG), to generate pixel-wise adaptive filters (PA-Filters).
Second, we introduce a response cue erasing (RCE) strategy to enhance the segmentation accuracy.
arXiv Detail & Related papers (2022-02-03T14:40:36Z) - Bi-Mix: Bidirectional Mixing for Domain Adaptive Nighttime Semantic
Segmentation [83.97914777313136]
In autonomous driving, learning a segmentation model that can adapt to various environmental conditions is crucial.
In this paper, we study the problem of Domain Adaptive Nighttime Semantic Segmentation (DANSS), which aims to learn a discriminative nighttime model.
We propose a novel Bi-Mix framework for DANSS, which can contribute to both image translation and segmentation adaptation processes.
arXiv Detail & Related papers (2021-11-19T17:39:47Z) - DANNet: A One-Stage Domain Adaptation Network for Unsupervised Nighttime
Semantic Segmentation [18.43890050736093]
We propose a novel domain adaptation network (DANNet) for nighttime semantic segmentation.
It employs adversarial training with a labeled daytime dataset and an unlabeled dataset that contains coarsely aligned day-night image pairs.
Our method achieves state-of-the-art performance for nighttime semantic segmentation.
arXiv Detail & Related papers (2021-04-22T02:49:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.