NiteDR: Nighttime Image De-Raining with Cross-View Sensor Cooperative Learning for Dynamic Driving Scenes
- URL: http://arxiv.org/abs/2402.18172v2
- Date: Sun, 7 Apr 2024 16:43:51 GMT
- Title: NiteDR: Nighttime Image De-Raining with Cross-View Sensor Cooperative Learning for Dynamic Driving Scenes
- Authors: Cidan Shi, Lihuang Fang, Han Wu, Xiaoyu Xian, Yukai Shi, Liang Lin
- Abstract summary: In nighttime driving scenes, insufficient and uneven lighting shrouds the scene in darkness, resulting in degraded image quality and visibility.
We develop an image de-raining framework tailored for rainy nighttime driving scenes.
It aims to remove rain artifacts, enrich scene representation, and restore useful information.
- Score: 49.92839157944134
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In real-world environments, outdoor imaging systems are often affected by disturbances such as rain degradation. In nighttime driving scenes especially, insufficient and uneven lighting shrouds the scene in darkness, resulting in degradation of both image quality and visibility. Particularly in the field of autonomous driving, the visual perception ability of RGB sensors declines sharply in such harsh scenarios. Additionally, driving assistance systems suffer from a reduced capability to capture and discern the surrounding environment, posing a threat to driving safety. Single-view information captured by single-modal sensors cannot comprehensively depict the entire scene. To address these challenges, we developed an image de-raining framework tailored for rainy nighttime driving scenes. It aims to remove rain artifacts, enrich scene representation, and restore useful information. Specifically, we introduce cooperative learning between visible and infrared images captured by different sensors. Through cross-view fusion of these multi-source data, the scene within the images gains richer texture details and enhanced contrast. We constructed an information cleaning module called CleanNet as the first stage of our framework. Moreover, we designed an information fusion module called FusionNet as the second stage to fuse the clean visible images with infrared images. Using this stage-by-stage learning strategy, we obtain de-rained fusion images with higher quality and better visual perception. Extensive experiments demonstrate the effectiveness of our proposed Cross-View Cooperative Learning (CVCL) in adverse driving scenarios in low-light rainy environments. The proposed approach addresses the gap in the utilization of existing rain removal algorithms in specific low-light conditions.
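The stage-by-stage strategy described in the abstract (de-rain the visible image with CleanNet first, then fuse it with the infrared view via FusionNet) can be sketched as a simple pipeline. This is a minimal illustrative sketch, not the paper's actual networks: `clean_net` and `fusion_net` here are hypothetical placeholders (a mean filter and a fixed-weight blend) standing in for the learned modules, and the function names are my own.

```python
import numpy as np

def clean_net(rainy_visible: np.ndarray) -> np.ndarray:
    """Stage 1 placeholder for CleanNet: suppress high-frequency rain
    streaks with a 3x3 mean filter. The real CleanNet is a learned
    de-raining network, not a fixed filter."""
    h, w = rainy_visible.shape
    padded = np.pad(rainy_visible, 1, mode="edge")
    acc = np.zeros((h, w), dtype=float)
    # average the 3x3 neighborhood via shifted views of the padded image
    for dy in range(3):
        for dx in range(3):
            acc += padded[dy:dy + h, dx:dx + w]
    return acc / 9.0

def fusion_net(clean_visible: np.ndarray, infrared: np.ndarray,
               weight: float = 0.6) -> np.ndarray:
    """Stage 2 placeholder for FusionNet: a fixed-weight cross-view blend.
    The real FusionNet learns how to combine the two modalities."""
    return weight * clean_visible + (1.0 - weight) * infrared

def cross_view_pipeline(rainy_visible: np.ndarray,
                        infrared: np.ndarray) -> np.ndarray:
    """Stage-by-stage strategy from the abstract: de-rain, then fuse."""
    return fusion_net(clean_net(rainy_visible), infrared)
```

With intensities normalized to [0, 1], the output stays in [0, 1] because each stage is a convex combination of its inputs; the sketch only conveys the data flow, not the learned behavior.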
Related papers
- LidaRF: Delving into Lidar for Neural Radiance Field on Street Scenes [73.65115834242866]
Photorealistic simulation plays a crucial role in applications such as autonomous driving.
However, reconstruction quality suffers in street scenes due to collinear camera motion and sparser sampling at higher speeds.
We propose several insights that allow a better utilization of Lidar data to improve NeRF quality on street scenes.
arXiv Detail & Related papers (2024-05-01T23:07:12Z) - Light the Night: A Multi-Condition Diffusion Framework for Unpaired Low-Light Enhancement in Autonomous Driving [45.97279394690308]
LightDiff is a framework designed to enhance the low-light image quality for autonomous driving applications.
It incorporates a novel multi-condition adapter that adaptively controls the input weights from different modalities, including depth maps, RGB images, and text captions.
It can significantly improve the performance of several state-of-the-art 3D detectors in night-time conditions while achieving high visual quality scores.
arXiv Detail & Related papers (2024-04-07T04:10:06Z) - Dual Degradation Representation for Joint Deraining and Low-Light Enhancement in the Dark [57.85378202032541]
Rain in the dark poses a significant challenge to deploying real-world applications such as autonomous driving, surveillance systems, and night photography.
Existing low-light enhancement or deraining methods struggle to brighten low-light conditions and remove rain simultaneously.
We introduce an end-to-end model called L$^2$RIRNet, designed to manage both low-light enhancement and deraining in real-world settings.
arXiv Detail & Related papers (2023-05-06T10:17:42Z) - ScatterNeRF: Seeing Through Fog with Physically-Based Inverse Neural Rendering [83.75284107397003]
We introduce ScatterNeRF, a neural rendering method which renders scenes and decomposes the fog-free background.
We propose a disentangled representation for the scattering volume and the scene objects, and learn the scene reconstruction with physics-inspired losses.
We validate our method by capturing multi-view In-the-Wild data and controlled captures in a large-scale fog chamber.
arXiv Detail & Related papers (2023-05-03T13:24:06Z) - Asymmetric Dual-Decoder U-Net for Joint Rain and Haze Removal [21.316673824040752]
In real-life scenarios, rain and haze, two often co-occurring common weather phenomena, can greatly degrade the clarity and quality of scene images.
We propose a novel deep neural network, named Asymmetric Dual-decoder U-Net (ADU-Net), to address the aforementioned challenge.
The ADU-Net produces both the contamination residual and the scene residual to efficiently remove the rain and haze while preserving the fidelity of the scene information.
arXiv Detail & Related papers (2022-06-14T12:50:41Z) - Semi-DRDNet Semi-supervised Detail-recovery Image Deraining Network via Unpaired Contrastive Learning [59.22620253308322]
We propose a semi-supervised detail-recovery image deraining network (termed Semi-DRDNet).
As a semi-supervised learning paradigm, Semi-DRDNet operates smoothly on both synthetic and real-world rainy data in terms of deraining robustness and detail accuracy.
arXiv Detail & Related papers (2022-04-06T12:35:27Z) - DAWN: Vehicle Detection in Adverse Weather Nature Dataset [4.09920839425892]
We present a new dataset consisting of real-world images collected under various adverse weather conditions called DAWN.
The dataset comprises a collection of 1000 images from real-traffic environments, which are divided into four sets of weather conditions: fog, snow, rain and sandstorms.
This data helps interpret the effects of adverse weather conditions on the performance of vehicle detection systems.
arXiv Detail & Related papers (2020-08-12T15:48:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.