Disentangle then Parse: Night-time Semantic Segmentation with Illumination Disentanglement
- URL: http://arxiv.org/abs/2307.09362v2
- Date: Wed, 19 Jul 2023 13:21:30 GMT
- Title: Disentangle then Parse: Night-time Semantic Segmentation with Illumination Disentanglement
- Authors: Zhixiang Wei, Lin Chen, Tao Tu, Huaian Chen, Pengyang Ling, Yi Jin
- Abstract summary: We propose a novel semantic segmentation paradigm, i.e., disentangle then parse (DTP)
DTP explicitly disentangles night-time images into light-invariant reflectance and light-specific illumination components.
We show that DTP significantly outperforms state-of-the-art methods for night-time segmentation.
- Score: 11.13045179377011
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most prior semantic segmentation methods have been developed for day-time
scenes, while typically underperforming in night-time scenes due to
insufficient and complicated lighting conditions. In this work, we tackle this
challenge by proposing a novel night-time semantic segmentation paradigm, i.e.,
disentangle then parse (DTP). DTP explicitly disentangles night-time images
into light-invariant reflectance and light-specific illumination components and
then recognizes semantics based on their adaptive fusion. Concretely, the
proposed DTP comprises two key components: 1) Instead of processing
lighting-entangled features as in prior works, our Semantic-Oriented
Disentanglement (SOD) framework enables the extraction of the reflectance
component without being impeded by lighting, allowing the network to
consistently recognize the semantics under varying and complicated lighting
conditions. 2) Based on the observation that the illumination component can
serve as a cue for some semantically confused regions, we further introduce an
Illumination-Aware Parser (IAParser) to explicitly learn the correlation
between semantics and lighting, and aggregate the illumination features to
yield more precise predictions. Extensive experiments on the night-time
segmentation task with various settings demonstrate that DTP significantly
outperforms state-of-the-art methods. Furthermore, with negligible additional
parameters, DTP can be directly used to benefit existing day-time methods for
night-time segmentation.
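The abstract names the two components but not their implementation. Purely as a minimal sketch of how a disentangle-then-parse forward pass could be wired up, the PyTorch snippet below splits a night image into a reflectance map and an illumination map, then fuses them with a learned gate before classification. All module names (Disentangler, IlluminationAwareParser, DTPSketch), channel sizes, and the gating fusion are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch of a disentangle-then-parse style forward pass.
# NOT the authors' code: module names, channel sizes, and the fusion
# scheme are assumptions based only on the abstract.
import torch
import torch.nn as nn


class Disentangler(nn.Module):
    """Splits an image into a reflectance-like and an illumination-like map."""

    def __init__(self, ch: int = 32):
        super().__init__()
        self.shared = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU())
        self.reflectance_head = nn.Conv2d(ch, 3, 3, padding=1)   # light-invariant content
        self.illumination_head = nn.Conv2d(ch, 1, 3, padding=1)  # light-specific map

    def forward(self, x):
        h = self.shared(x)
        return torch.sigmoid(self.reflectance_head(h)), torch.sigmoid(self.illumination_head(h))


class IlluminationAwareParser(nn.Module):
    """Predicts semantics from reflectance, modulated by illumination cues."""

    def __init__(self, num_classes: int = 19, ch: int = 64):  # 19 = Cityscapes-style classes (assumption)
        super().__init__()
        self.reflectance_enc = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU())
        self.illumination_enc = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU())
        self.gate = nn.Sequential(nn.Conv2d(2 * ch, ch, 1), nn.Sigmoid())
        self.classifier = nn.Conv2d(ch, num_classes, 1)

    def forward(self, reflectance, illumination):
        r = self.reflectance_enc(reflectance)
        i = self.illumination_enc(illumination)
        g = self.gate(torch.cat([r, i], dim=1))  # adaptive fusion weight
        return self.classifier(r + g * i)        # illumination used as a cue, not noise


class DTPSketch(nn.Module):
    def __init__(self, num_classes: int = 19):
        super().__init__()
        self.disentangler = Disentangler()
        self.parser = IlluminationAwareParser(num_classes)

    def forward(self, night_image):
        reflectance, illumination = self.disentangler(night_image)
        return self.parser(reflectance, illumination)


if __name__ == "__main__":
    logits = DTPSketch()(torch.randn(1, 3, 256, 512))
    print(logits.shape)  # torch.Size([1, 19, 256, 512])
```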
Related papers
- Night-to-Day Translation via Illumination Degradation Disentanglement [51.77716565167767]
Night-to-Day translation aims to achieve day-like vision for nighttime scenes.
However, processing night images with complex degradations remains a significant challenge under unpaired conditions.
We propose N2D3 to identify different degradation patterns in nighttime images.
arXiv Detail & Related papers (2024-11-21T08:51:32Z)
- Exploring Reliable Matching with Phase Enhancement for Night-time Semantic Segmentation [58.180226179087086]
We propose a novel end-to-end optimized approach, named NightFormer, tailored for night-time semantic segmentation.
Specifically, we design a pixel-level texture enhancement module to acquire texture-aware features hierarchically with phase enhancement and amplified attention.
Our proposed method performs favorably against state-of-the-art night-time semantic segmentation methods.
arXiv Detail & Related papers (2024-08-25T13:59:31Z)
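The entry above mentions phase enhancement but gives no formulation. The sketch below shows one common way such an idea can be realized: reconstruct a phase-only (unit-amplitude) version of a feature map with a 2D FFT, which keeps structure and texture while discarding brightness, and mix it back in. The PhaseEnhance module and its mixing convolution are assumptions for illustration, not NightFormer's actual design.

```python
# Rough, assumption-based sketch of Fourier-phase feature enhancement.
import torch
import torch.nn as nn


class PhaseEnhance(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # learnable mixing of the original and the phase-only features
        self.mix = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        spec = torch.fft.fft2(feat, norm="ortho")       # 2D FFT over spatial dims
        phase = torch.angle(spec)
        # phase-only signal: unit amplitude, original phase
        phase_only = torch.fft.ifft2(torch.exp(1j * phase), norm="ortho").real
        return self.mix(torch.cat([feat, phase_only], dim=1))


feat = torch.randn(2, 64, 32, 64)
print(PhaseEnhance(64)(feat).shape)  # torch.Size([2, 64, 32, 64])
```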
- RHRSegNet: Relighting High-Resolution Night-Time Semantic Segmentation [0.0]
Night-time semantic segmentation is a crucial task in computer vision, focusing on accurately classifying and segmenting objects in low-light conditions.
We propose RHRSegNet, implementing a relighting model over a High-Resolution Network for semantic segmentation.
Our proposed model increases the HRNet segmentation performance by 5% on low-light or nighttime images.
arXiv Detail & Related papers (2024-07-08T15:07:09Z)
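As a rough illustration of the relight-then-segment idea described above, the snippet below predicts a per-pixel brightness gain and feeds the relit image to a segmentation backbone. The Relighter module and the generic backbone placeholder are assumptions; RHRSegNet's actual relighting model and its HRNet integration are not reproduced here.

```python
# Illustrative-only sketch of "relight, then segment".
import torch
import torch.nn as nn


class Relighter(nn.Module):
    """Predicts a positive per-pixel brightness gain applied to the dark input."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Softplus(),  # gain >= 0
        )

    def forward(self, x):
        return torch.clamp(x * (1.0 + self.net(x)), 0.0, 1.0)


class RelightThenSegment(nn.Module):
    def __init__(self, backbone: nn.Module):
        super().__init__()
        self.relighter = Relighter()
        self.backbone = backbone  # stand-in for an HRNet-style segmentation network

    def forward(self, night_image):
        return self.backbone(self.relighter(night_image))


# quick shape check with a dummy 1x1-conv "backbone"
dummy_backbone = nn.Conv2d(3, 19, kernel_size=1)
print(RelightThenSegment(dummy_backbone)(torch.rand(1, 3, 128, 256)).shape)
```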
- Beyond Night Visibility: Adaptive Multi-Scale Fusion of Infrared and Visible Images [49.75771095302775]
We propose an Adaptive Multi-scale Fusion network (AMFusion) that fuses infrared and visible images.
First, we separately fuse spatial and semantic features from infrared and visible images, where the former are used for the adjustment of light distribution.
Second, we utilize detection features extracted by a pre-trained backbone that guide the fusion of semantic features.
Third, we propose a new illumination loss that constrains the fused image to have normal light intensity.
arXiv Detail & Related papers (2024-03-02T03:52:07Z)
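The exact form of AMFusion's illumination loss is not given above. The sketch below uses a common exposure-control surrogate instead: penalize the patch-wise mean brightness of the fused image for drifting from a "normal light" target. The function name, the target level of 0.55, and the patch size are assumptions, not the paper's formulation.

```python
# Assumption-based surrogate for an illumination/exposure loss on a fused image.
import torch
import torch.nn.functional as F


def illumination_loss(fused_rgb: torch.Tensor, target_level: float = 0.55,
                      patch: int = 16) -> torch.Tensor:
    """fused_rgb: (B, 3, H, W) in [0, 1]."""
    luminance = fused_rgb.mean(dim=1, keepdim=True)          # rough brightness proxy
    local_mean = F.avg_pool2d(luminance, kernel_size=patch)  # patch-wise exposure
    return ((local_mean - target_level) ** 2).mean()


print(illumination_loss(torch.rand(2, 3, 128, 128)).item())
```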
- Factorized Inverse Path Tracing for Efficient and Accurate Material-Lighting Estimation [97.0195314255101]
Inverse path tracing is expensive to compute, and ambiguities exist between reflection and emission.
Our Factorized Inverse Path Tracing (FIPT) addresses these challenges by using a factored light transport formulation.
Our algorithm enables accurate material and lighting optimization faster than previous work, and is more effective at resolving ambiguities.
arXiv Detail & Related papers (2023-04-12T07:46:05Z)
- When the Sun Goes Down: Repairing Photometric Losses for All-Day Depth Estimation [47.617222712429026]
We show how to use a combination of three techniques to allow the existing photometric losses to work for both day and nighttime images.
First, we introduce a per-pixel neural intensity transformation to compensate for the light changes that occur between successive frames.
Second, we predict a per-pixel residual flow map that we use to correct the reprojection correspondences induced by the estimated ego-motion and depth.
arXiv Detail & Related papers (2022-06-28T09:29:55Z)
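As a hedged sketch of the per-pixel intensity transformation idea above, the snippet predicts per-pixel contrast and brightness corrections for a reprojected frame before computing a simple L1 photometric term. The IntensityTransform module, its input concatenation, and the affine form (1 + a) * x + b are assumptions, not the paper's exact formulation.

```python
# Assumption-based sketch of a per-pixel affine intensity correction
# applied before a photometric loss.
import torch
import torch.nn as nn


class IntensityTransform(nn.Module):
    """Predicts per-pixel contrast (a) and brightness (b) from the frame pair."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),
        )

    def forward(self, warped_ref, target):
        a, b = self.net(torch.cat([warped_ref, target], dim=1)).chunk(2, dim=1)
        corrected = (1.0 + a) * warped_ref + b            # per-pixel affine correction
        photometric = (corrected - target).abs().mean()   # simple L1 photometric term
        return corrected, photometric


warped, target = torch.rand(1, 3, 96, 320), torch.rand(1, 3, 96, 320)
corrected, loss = IntensityTransform()(warped, target)
print(corrected.shape, loss.item())
```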
- Cross-Domain Correlation Distillation for Unsupervised Domain Adaptation in Nighttime Semantic Segmentation [17.874336775904272]
We propose a novel domain adaptation framework via cross-domain correlation distillation, called CCDistill.
We extract the content and style knowledge contained in the features and calculate the degree of inherent or illumination difference between two images.
Experiments on Dark Zurich and ACDC demonstrate that CCDistill achieves the state-of-the-art performance for nighttime semantic segmentation.
arXiv Detail & Related papers (2022-05-02T12:42:04Z)
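The summary above does not spell out the distillation objective. The sketch below shows a generic correlation-matching loss that aligns channel-correlation (Gram-like) statistics between day-domain and night-domain features, as one plausible instance of correlation distillation; the function names and the L1 matching criterion are assumptions, and CCDistill's actual losses may differ.

```python
# Generic, assumption-based correlation-distillation style loss.
import torch
import torch.nn.functional as F


def channel_correlation(feat: torch.Tensor) -> torch.Tensor:
    """feat: (B, C, H, W) -> (B, C, C) cosine correlation between channels."""
    f = F.normalize(feat.flatten(2), dim=2)     # (B, C, H*W), unit-norm rows
    return torch.bmm(f, f.transpose(1, 2))


def correlation_distill_loss(day_feat: torch.Tensor, night_feat: torch.Tensor) -> torch.Tensor:
    return F.l1_loss(channel_correlation(day_feat), channel_correlation(night_feat))


print(correlation_distill_loss(torch.randn(2, 64, 16, 32), torch.randn(2, 64, 16, 32)).item())
```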
- Bi-Mix: Bidirectional Mixing for Domain Adaptive Nighttime Semantic Segmentation [83.97914777313136]
In autonomous driving, learning a segmentation model that can adapt to various environmental conditions is crucial.
In this paper, we study the problem of Domain Adaptive Nighttime Semantic Segmentation (DANSS), which aims to learn a discriminative nighttime model.
We propose a novel Bi-Mix framework for DANSS, which can contribute to both image translation and segmentation adaptation processes.
arXiv Detail & Related papers (2021-11-19T17:39:47Z)
- HeatNet: Bridging the Day-Night Domain Gap in Semantic Segmentation with Thermal Images [26.749261270690425]
Real-world driving scenarios entail adverse environmental conditions such as nighttime illumination or glare.
We propose a multimodal semantic segmentation model that can be applied during daytime and nighttime.
Besides RGB images, we leverage thermal images, making our network significantly more robust.
arXiv Detail & Related papers (2020-03-10T11:36:42Z)
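To make the multimodal setup above concrete, the minimal sketch below fuses an RGB stream and a thermal stream by feature concatenation before a segmentation head. The RGBThermalSegmenter module and its single-layer encoders are assumptions for illustration; HeatNet's real architecture and its day-night adaptation strategy are not reproduced.

```python
# Minimal, assumption-based two-stream RGB + thermal segmentation sketch.
import torch
import torch.nn as nn


class RGBThermalSegmenter(nn.Module):
    def __init__(self, num_classes: int = 19, ch: int = 32):
        super().__init__()
        self.rgb_enc = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU())
        self.thermal_enc = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(2 * ch, num_classes, 1)  # fuse by channel concatenation

    def forward(self, rgb, thermal):
        fused = torch.cat([self.rgb_enc(rgb), self.thermal_enc(thermal)], dim=1)
        return self.head(fused)


logits = RGBThermalSegmenter()(torch.rand(1, 3, 128, 256), torch.rand(1, 1, 128, 256))
print(logits.shape)  # torch.Size([1, 19, 128, 256])
```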