Rethinking Nighttime Image Deraining via Learnable Color Space Transformation
- URL: http://arxiv.org/abs/2510.17440v1
- Date: Mon, 20 Oct 2025 11:28:43 GMT
- Title: Rethinking Nighttime Image Deraining via Learnable Color Space Transformation
- Authors: Qiyuan Guan, Xiang Chen, Guiyue Jin, Jiyu Jin, Shumin Fan, Tianyu Song, Jinshan Pan
- Abstract summary: We develop a new high-quality benchmark, HQ-NightRain, which offers higher harmony and realism than existing datasets. We also develop an effective Color Space Transformation Network (CST-Net) for removing complex rain from nighttime scenes.
- Score: 38.0322908418521
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Compared to daytime image deraining, nighttime image deraining poses significant challenges due to the inherent complexities of nighttime scenarios and the lack of high-quality datasets that accurately represent the coupling effect between rain and illumination. In this paper, we rethink the task of nighttime image deraining and contribute a new high-quality benchmark, HQ-NightRain, which offers higher harmony and realism than existing datasets. In addition, we develop an effective Color Space Transformation Network (CST-Net) to better remove complex rain from nighttime scenes. Specifically, we propose a learnable color space converter (CSC) to facilitate rain removal in the Y channel, as nighttime rain is more pronounced in the Y channel than in the RGB color space. To capture illumination information for guiding nighttime deraining, implicit illumination guidance is introduced, enabling the learned features to improve the model's robustness in complex scenarios. Extensive experiments show the value of our dataset and the effectiveness of our method. The source code and datasets are available at https://github.com/guanqiyuan/CST-Net.
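The abstract describes the learnable color space converter only at a high level. As a rough illustration, here is a minimal PyTorch sketch of one plausible form: a 1x1 convolution initialized with the standard BT.601 RGB-to-YCbCr matrix and refined during training. Class and variable names are hypothetical, not the authors' implementation.

```python
import torch
import torch.nn as nn

class LearnableColorSpaceConverter(nn.Module):
    """Maps RGB to a learnable YCbCr-like space via a 1x1 convolution.

    The weights start at the standard BT.601 RGB->YCbCr matrix and are
    refined during training, so the network can adapt the color space to
    nighttime rain statistics (a sketch, not the paper's exact CSC).
    """

    def __init__(self):
        super().__init__()
        self.to_ycbcr = nn.Conv2d(3, 3, kernel_size=1, bias=True)
        # BT.601 full-range RGB -> YCbCr coefficients as initialization.
        init = torch.tensor([[ 0.299,  0.587,  0.114],
                             [-0.169, -0.331,  0.500],
                             [ 0.500, -0.419, -0.081]])
        with torch.no_grad():
            self.to_ycbcr.weight.copy_(init.view(3, 3, 1, 1))
            self.to_ycbcr.bias.zero_()  # chroma offsets omitted in this sketch

    def forward(self, rgb: torch.Tensor):
        ycbcr = self.to_ycbcr(rgb)          # (B, 3, H, W)
        y, cb, cr = ycbcr.split(1, dim=1)   # rain is most visible in Y
        return y, cb, cr

# Usage: derain the Y channel with any backbone, then invert the transform.
csc = LearnableColorSpaceConverter()
y, cb, cr = csc(torch.rand(1, 3, 64, 64))
```

Initializing at the fixed YCbCr matrix means the network starts from the conventional color space and can drift toward whatever transform best isolates nighttime rain in the Y-like channel.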
Related papers
- NDLPNet: A Location-Aware Nighttime Deraining Network and a Real-World Benchmark Dataset [8.582528726118023]
Rain streak artifacts hamper the performance of nighttime surveillance and autonomous navigation. We propose a novel Nighttime Deraining Location-enhanced Perceptual Network (NDLPNet) that captures the spatial positional information and density distribution of rain streaks in low-light environments.
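The summary only names the idea of location awareness. One common way to inject spatial positional information is a CoordConv-style layer that concatenates normalized coordinate channels before convolution; the sketch below assumes that reading and is not NDLPNet's actual design.

```python
import torch
import torch.nn as nn

class LocationAwareConv(nn.Module):
    """Concatenates normalized (x, y) coordinate maps to the input so the
    convolution can condition on spatial position (a generic sketch)."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + 2, out_ch, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, _, h, w = x.shape
        ys = torch.linspace(-1, 1, h, device=x.device, dtype=x.dtype)
        xs = torch.linspace(-1, 1, w, device=x.device, dtype=x.dtype)
        ys = ys.view(1, 1, h, 1).expand(b, 1, h, w)  # row coordinate map
        xs = xs.view(1, 1, 1, w).expand(b, 1, h, w)  # column coordinate map
        return self.conv(torch.cat([x, ys, xs], dim=1))
```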
arXiv Detail & Related papers (2025-09-17T07:24:47Z)
- The Devil is in the Darkness: Diffusion-Based Nighttime Dehazing Anchored in Brightness Perception [58.895000127068194]
We introduce the Diffusion-Based Nighttime Dehazing framework, which excels in both data synthesis and lighting reconstruction. We propose a restoration model that integrates a pre-trained diffusion model guided by a brightness perception network. Experiments validate our dataset's utility and the model's superior performance in joint haze removal and brightness mapping.
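How the brightness perception network guides the diffusion model is not specified in the summary. One plausible sketch, assuming the predicted brightness map is simply concatenated to the denoiser input as an extra conditioning channel (both submodules here are placeholders):

```python
import torch
import torch.nn as nn

class BrightnessGuidedDenoiser(nn.Module):
    """Wraps a diffusion denoiser so each step also sees a brightness map
    predicted from the hazy input. This is one assumed form of 'guided by
    a brightness perception network', not the paper's architecture."""

    def __init__(self, denoiser: nn.Module, brightness_net: nn.Module):
        super().__init__()
        self.denoiser = denoiser            # assumed signature: (x, t) -> eps
        self.brightness_net = brightness_net  # assumed: hazy image -> (B,1,H,W)

    def forward(self, x_t: torch.Tensor, hazy: torch.Tensor, t: torch.Tensor):
        guidance = self.brightness_net(hazy)  # per-pixel brightness estimate
        return self.denoiser(torch.cat([x_t, guidance], dim=1), t)
```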
arXiv Detail & Related papers (2025-06-03T03:21:13Z)
- RHRSegNet: Relighting High-Resolution Night-Time Semantic Segmentation [0.0]
Nighttime semantic segmentation is a crucial task in computer vision, focusing on accurately classifying and segmenting objects in low-light conditions.
We propose RHRSegNet, which implements a relighting model on top of a High-Resolution Network (HRNet) for semantic segmentation.
Our proposed model improves HRNet segmentation performance by 5% on low-light and nighttime images.
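As a rough sketch of the relight-then-segment idea, assuming a small residual relighting head in front of a placeholder segmentation backbone (not the published RHRSegNet architecture):

```python
import torch
import torch.nn as nn

class RelightThenSegment(nn.Module):
    """Learns a residual relighting map and feeds the relit image to a
    segmentation backbone. The backbone is a placeholder, not HRNet."""

    def __init__(self, seg_backbone: nn.Module):
        super().__init__()
        self.relight = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 3, 3, padding=1),
        )
        self.seg = seg_backbone

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Brighten the dark input, then segment the relit image.
        relit = torch.clamp(x + self.relight(x), 0.0, 1.0)
        return self.seg(relit)
```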
arXiv Detail & Related papers (2024-07-08T15:07:09Z)
- NiteDR: Nighttime Image De-Raining with Cross-View Sensor Cooperative Learning for Dynamic Driving Scenes [49.92839157944134]
In nighttime driving scenes, insufficient and uneven lighting shrouds the scene in darkness, resulting in degraded image quality and visibility.
We develop an image de-raining framework tailored for rainy nighttime driving scenes.
It aims to remove rain artifacts, enrich scene representation, and restore useful information.
arXiv Detail & Related papers (2024-02-28T09:02:33Z)
- You Only Need One Color Space: An Efficient Network for Low-light Image Enhancement [50.37253008333166]
The Low-Light Image Enhancement (LLIE) task aims to restore detail and visual information from corrupted low-light images.
We propose a novel trainable color space, named Horizontal/Vertical-Intensity (HVI).
It not only decouples brightness and color from the RGB channels to mitigate instability during enhancement, but also adapts to low-light images across different illumination ranges thanks to its trainable parameters.
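The published HVI transform is more involved than this summary suggests. As a minimal sketch of the underlying idea of a trainable intensity/color decoupling, with an assumed learnable exponent `gamma`:

```python
import torch
import torch.nn as nn

class TrainableIntensityColorSplit(nn.Module):
    """Decouples an image into an intensity map and chromatic channels with
    a trainable scale. A rough sketch of the decoupling idea only; the
    actual HVI transform differs."""

    def __init__(self):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(1))  # trainable intensity curve

    def forward(self, rgb: torch.Tensor):
        peak = rgb.max(dim=1, keepdim=True).values.clamp(min=1e-6)
        intensity = peak ** self.gamma  # adapts to different illumination ranges
        color = rgb / peak              # chromatic component, brightness removed
        return intensity, color
```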
arXiv Detail & Related papers (2024-02-08T16:47:43Z)
- Dual Degradation Representation for Joint Deraining and Low-Light Enhancement in the Dark [57.85378202032541]
Rain in the dark poses a significant challenge to deploying real-world applications such as autonomous driving, surveillance systems, and night photography.
Existing low-light enhancement or deraining methods struggle to brighten low-light conditions and remove rain simultaneously.
We introduce an end-to-end model called L$^2$RIRNet, designed to handle both low-light enhancement and deraining in real-world settings.
arXiv Detail & Related papers (2023-05-06T10:17:42Z)
- GTAV-NightRain: Photometric Realistic Large-scale Dataset for Night-time Rain Streak Removal [30.93624632770902]
Rain is transparent: it reflects and refracts scene light toward the camera.
Existing rain streak removal datasets account for density, scale, direction, and intensity, but transparency is not fully taken into account.
This paper proposes the GTAV-NightRain dataset, a large-scale synthetic night-time rain streak removal dataset.
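As a generic illustration of modeling rain transparency in synthesis (not the GTAV-NightRain pipeline, which renders rain inside the game engine), semi-transparent streaks can be alpha-composited over a clean image:

```python
import torch

def composite_rain(background: torch.Tensor,
                   streaks: torch.Tensor,
                   alpha: torch.Tensor) -> torch.Tensor:
    """Alpha-composites semi-transparent rain streaks over a clean image:
    I = (1 - alpha) * B + alpha * R, where alpha in [0, 1] encodes the
    per-pixel transparency of the rain layer. A generic sketch only."""
    return (1.0 - alpha) * background + alpha * streaks
```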
arXiv Detail & Related papers (2022-10-10T14:08:09Z)
- Conditional Variational Image Deraining [158.76814157115223]
We propose a Conditional Variational Image Deraining (CVID) network for better deraining performance.
We propose a spatial density estimation (SDE) module to estimate a rain density map for each image.
Experiments on synthesized and real-world datasets show that the proposed CVID network achieves much better performance than previous deterministic methods on image deraining.
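The summary does not describe the SDE module's structure. A minimal sketch of a per-pixel density prediction head, with assumed layer sizes (not the CVID architecture):

```python
import torch
import torch.nn as nn

class DensityHead(nn.Module):
    """Predicts a per-pixel rain-density map from intermediate features,
    a sketch of what a spatial density estimation (SDE) module might
    output. Channel widths here are assumptions."""

    def __init__(self, in_ch: int = 64):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 1), nn.Sigmoid(),  # density normalized to [0, 1]
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.head(feats)  # (B, 1, H, W) rain density map
```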
arXiv Detail & Related papers (2020-04-23T11:51:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.