ZeroScatter: Domain Transfer for Long Distance Imaging and Vision
through Scattering Media
- URL: http://arxiv.org/abs/2102.05847v1
- Date: Thu, 11 Feb 2021 04:41:17 GMT
- Title: ZeroScatter: Domain Transfer for Long Distance Imaging and Vision
through Scattering Media
- Authors: Zheng Shi, Ethan Tseng, Mario Bijelic, Werner Ritter, Felix Heide
- Abstract summary: We present ZeroScatter, a domain transfer method for converting RGB-only captures taken in adverse weather into clear daytime scenes.
We assess the proposed method using real-world captures, where it outperforms existing monocular de-scattering approaches by 2.8 dB PSNR on controlled fog chamber measurements.
- Score: 26.401067775059154
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adverse weather conditions, including snow, rain, and fog, pose a challenge
for both human and computer vision in outdoor scenarios. Handling these
environmental conditions is essential for safe decision making, especially in
autonomous vehicles, robotics, and drones. Most of today's supervised imaging
and vision approaches, however, rely on training data collected in the real
world that is biased towards good weather conditions, with dense fog, snow, and
heavy rain as outliers in these datasets. Without training data, let alone
paired data, existing autonomous vehicles often limit themselves to good
conditions and stop when dense fog or snow is detected. In this work, we tackle
the lack of supervised training data by combining synthetic and indirect
supervision. We present ZeroScatter, a domain transfer method for converting
RGB-only captures taken in adverse weather into clear daytime scenes.
ZeroScatter exploits model-based, temporal, multi-view, multi-modal, and
adversarial cues in a joint fashion, allowing us to train on unpaired, biased
data. We assess the proposed method on real-world captures, where it outperforms
existing monocular de-scattering approaches by 2.8 dB PSNR on controlled fog
chamber measurements.
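The abstract does not spell out the model-based cue, but monocular de-scattering work commonly builds on the Koschmieder atmospheric scattering model, and the reported gain is measured in PSNR. A minimal illustrative sketch of both, assuming this standard model (function names and parameter values are hypothetical, not taken from the paper):

```python
import numpy as np

def add_synthetic_fog(clear_rgb, depth_m, beta=0.05, airlight=0.8):
    """Koschmieder model: I = J * t + A * (1 - t), with t = exp(-beta * d)."""
    t = np.exp(-beta * depth_m)[..., None]       # per-pixel transmittance
    return clear_rgb * t + airlight * (1.0 - t)  # simulated foggy observation

def psnr(reference, estimate, peak=1.0):
    """Peak signal-to-noise ratio in dB, the metric used for fog-chamber evaluation."""
    mse = np.mean((reference - estimate) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy example: a de-scattering method that restores the clear scene more
# faithfully raises PSNR against the ground-truth clear capture.
clear = np.random.rand(64, 64, 3)
depth = np.random.uniform(5.0, 80.0, size=(64, 64))
foggy = add_synthetic_fog(clear, depth)
print(f"foggy vs. clear: {psnr(clear, foggy):.1f} dB")
```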
Related papers
- Digital twins to alleviate the need for real field data in vision-based vehicle speed detection systems [0.9899633398596672]
Accurate vision-based speed estimation is more cost-effective than traditional methods based on radar or LiDAR.
Deep learning approaches are very limited in this context due to the lack of available data.
In this work, we propose using digital twins built with the CARLA simulator to generate a large dataset representative of a specific real-world camera.
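A digital twin of a specific real-world camera in CARLA amounts to spawning an RGB sensor whose resolution, field of view, and mounting pose match the physical installation. A minimal sketch assuming the CARLA Python API; the attribute values and pose below are placeholders, not those used in the paper:

```python
import carla

# Connect to a running CARLA server (assumed on localhost:2000).
client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

# RGB camera blueprint configured to mimic the real camera's intrinsics.
bp = world.get_blueprint_library().find("sensor.camera.rgb")
bp.set_attribute("image_size_x", "1920")
bp.set_attribute("image_size_y", "1080")
bp.set_attribute("fov", "60")

# Mounting pose of the digital twin (e.g. roadside pole looking down the road).
pose = carla.Transform(carla.Location(x=0.0, y=0.0, z=8.0),
                       carla.Rotation(pitch=-15.0))
camera = world.spawn_actor(bp, pose)

# Stream frames to disk to build the synthetic training set.
camera.listen(lambda image: image.save_to_disk(f"out/{image.frame:06d}.png"))
```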
arXiv Detail & Related papers (2024-07-11T10:41:20Z) - ContextualFusion: Context-Based Multi-Sensor Fusion for 3D Object Detection in Adverse Operating Conditions [1.7537812081430004]
We propose a technique called ContextualFusion that incorporates domain knowledge about how cameras and lidars behave differently across lighting and weather variations into 3D object detection models.
Our approach yields an mAP improvement of 6.2% over state-of-the-art methods on our context-balanced synthetic dataset.
Our method enhances state-of-the-art 3D object detection performance at night on the real-world NuScenes dataset with a significant mAP improvement of 11.7%.
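The summary only states that camera and lidar reliability varies with lighting and weather; one plausible reading is a context-conditioned gate that reweights the two feature streams before the detection head. A hedged PyTorch sketch of such a gate (the module and its dimensions are illustrative, not the ContextualFusion architecture):

```python
import torch
import torch.nn as nn

class ContextGatedFusion(nn.Module):
    """Reweights camera and lidar features with a gate predicted from a
    context vector (e.g. an encoded time-of-day / weather state)."""
    def __init__(self, feat_dim=256, ctx_dim=8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(ctx_dim, 64), nn.ReLU(),
            nn.Linear(64, 2), nn.Softmax(dim=-1),  # weights for [camera, lidar]
        )

    def forward(self, cam_feat, lidar_feat, context):
        w = self.gate(context)               # (B, 2)
        w_cam, w_lidar = w[:, :1], w[:, 1:]  # (B, 1) each
        return w_cam * cam_feat + w_lidar * lidar_feat

fusion = ContextGatedFusion()
cam = torch.randn(4, 256)        # pooled camera features (toy shapes)
lidar = torch.randn(4, 256)      # pooled lidar features
ctx = torch.randn(4, 8)          # encoded lighting/weather context
fused = fusion(cam, lidar, ctx)  # (4, 256), fed to the detection head
```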
arXiv Detail & Related papers (2024-04-23T06:37:54Z) - NiteDR: Nighttime Image De-Raining with Cross-View Sensor Cooperative Learning for Dynamic Driving Scenes [49.92839157944134]
In nighttime driving scenes, insufficient and uneven lighting shrouds the scenes in darkness, resulting in degradation of image quality and visibility.
We develop an image de-raining framework tailored for rainy nighttime driving scenes.
It aims to remove rain artifacts, enrich scene representation, and restore useful information.
arXiv Detail & Related papers (2024-02-28T09:02:33Z) - Robust Monocular Depth Estimation under Challenging Conditions [81.57697198031975]
State-of-the-art monocular depth estimation approaches are highly unreliable under challenging illumination and weather conditions.
We tackle these safety-critical issues with md4all: a simple and effective solution that works reliably under both adverse and ideal conditions.
arXiv Detail & Related papers (2023-08-18T17:59:01Z) - ScatterNeRF: Seeing Through Fog with Physically-Based Inverse Neural
Rendering [83.75284107397003]
We introduce ScatterNeRF, a neural rendering method which renders scenes and decomposes the fog-free background.
We propose a disentangled representation for the scattering volume and the scene objects, and learn the scene reconstruction with physics-inspired losses.
We validate our method by capturing multi-view In-the-Wild data and controlled captures in a large-scale fog chamber.
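A disentangled representation of this kind suggests two densities along each camera ray, one for scene geometry and one for the scattering medium, composited by standard volume rendering. A minimal NumPy sketch under that assumption (homogeneous fog, toy values; not the ScatterNeRF formulation):

```python
import numpy as np

def render_ray(sigma_scene, color_scene, sigma_fog, color_fog, deltas):
    """Alpha-composite scene and fog contributions along one ray.

    sigma_scene, sigma_fog : per-sample densities, shape (N,)
    color_scene, color_fog : per-sample RGB, shape (N, 3)
    deltas                 : distances between samples, shape (N,)
    """
    sigma = sigma_scene + sigma_fog
    alpha = 1.0 - np.exp(-sigma * deltas)                           # opacity per sample
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha]))[:-1]   # transmittance
    # Each sample's colour mixes scene and fog in proportion to their densities.
    mix = np.where(sigma > 0, sigma_scene / np.maximum(sigma, 1e-8), 0.0)[:, None]
    color = mix * color_scene + (1.0 - mix) * color_fog
    return np.sum(trans[:, None] * alpha[:, None] * color, axis=0)

# Toy ray: 64 samples, constant fog density, a surface hit near sample 40.
n = 64
deltas = np.full(n, 0.5)
sigma_fog = np.full(n, 0.02)
sigma_scene = np.zeros(n); sigma_scene[40] = 10.0
pixel = render_ray(sigma_scene, np.tile([0.2, 0.6, 0.3], (n, 1)),
                   sigma_fog, np.full((n, 3), 0.8), deltas)
```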
arXiv Detail & Related papers (2023-05-03T13:24:06Z) - SHIFT: A Synthetic Driving Dataset for Continuous Multi-Task Domain
Adaptation [152.60469768559878]
SHIFT is the largest multi-task synthetic dataset for autonomous driving.
It presents discrete and continuous shifts in cloudiness, rain and fog intensity, time of day, and vehicle and pedestrian density.
Our dataset and benchmark toolkit are publicly available at www.vis.xyz/shift.
arXiv Detail & Related papers (2022-06-16T17:59:52Z) - Vision in adverse weather: Augmentation using CycleGANs with various
object detectors for robust perception in autonomous racing [70.16043883381677]
In autonomous racing, the weather can change abruptly, causing significant degradation in perception, resulting in ineffective manoeuvres.
In order to improve detection in adverse weather, deep-learning-based models typically require extensive datasets captured in such conditions.
We introduce an approach that uses synthesised adverse-condition datasets (generated with CycleGAN) to improve the performance of four out of five state-of-the-art detectors in autonomous racing.
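CycleGAN learns the clear-to-adverse mapping from unpaired images by combining an adversarial loss with a cycle-consistency loss. A compact PyTorch sketch of the generator-side objective (the generators and discriminator are assumed to be defined elsewhere; names are illustrative):

```python
import torch
import torch.nn.functional as F

def cyclegan_losses(G_c2a, G_a2c, D_adverse, clear, adverse_real, lambda_cyc=10.0):
    """Generator-side CycleGAN losses for the clear -> adverse direction.

    G_c2a, G_a2c : generators mapping clear->adverse and adverse->clear
    D_adverse    : discriminator on the adverse-weather domain
    """
    fake_adverse = G_c2a(clear)
    # Adversarial (least-squares GAN) term: fool the adverse-domain critic.
    pred = D_adverse(fake_adverse)
    adv = F.mse_loss(pred, torch.ones_like(pred))
    # Cycle-consistency term: clear -> adverse -> clear should reconstruct.
    cyc = F.l1_loss(G_a2c(fake_adverse), clear)
    return adv + lambda_cyc * cyc
```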
arXiv Detail & Related papers (2022-01-10T10:02:40Z) - Lidar Light Scattering Augmentation (LISA): Physics-based Simulation of
Adverse Weather Conditions for 3D Object Detection [60.89616629421904]
Lidar-based object detectors are critical parts of the 3D perception pipeline in autonomous navigation systems such as self-driving cars.
They are sensitive to adverse weather conditions such as rain, snow, and fog due to reduced signal-to-noise ratio (SNR) and signal-to-background ratio (SBR).
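A physics-based augmentation of this kind typically attenuates each lidar return by the two-way extinction through fog and drops points that fall below the detector's noise floor. A hedged NumPy sketch of that idea (the coefficients are placeholders, not LISA's calibrated model):

```python
import numpy as np

def attenuate_point_cloud(points, intensity, alpha=0.05, noise_floor=0.02):
    """Simulate fog attenuation of lidar returns.

    points    : (N, 3) xyz in metres
    intensity : (N,) clear-weather return intensities in [0, 1]
    alpha     : fog extinction coefficient [1/m]
    """
    rng = np.linalg.norm(points, axis=1)
    # Two-way transmission through fog reduces received power (lower SNR).
    attenuated = intensity * np.exp(-2.0 * alpha * rng)
    keep = attenuated > noise_floor          # returns below the noise floor are lost
    return points[keep], attenuated[keep]

pts = np.random.uniform(-60, 60, size=(1000, 3))
inten = np.random.uniform(0.1, 1.0, size=1000)
foggy_pts, foggy_inten = attenuate_point_cloud(pts, inten)
print(f"kept {len(foggy_pts)} of {len(pts)} returns")
```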
arXiv Detail & Related papers (2021-07-14T21:10:47Z) - Multimodal End-to-End Learning for Autonomous Steering in Adverse Road
and Weather Conditions [0.0]
We extend the previous work on end-to-end learning for autonomous steering to operate in adverse real-life conditions with multimodal data.
We collected 28 hours of driving data in several road and weather conditions and trained convolutional neural networks to predict the car steering wheel angle.
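End-to-end steering regresses the steering wheel angle directly from sensor frames. A minimal PyTorch sketch of a camera-plus-lidar-BEV variant (the architecture and input sizes are illustrative, not the paper's network):

```python
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    """Regresses a steering wheel angle from an RGB frame and a lidar BEV map."""
    def __init__(self):
        super().__init__()
        self.rgb_encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.lidar_encoder = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(nn.Linear(64 + 32, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, rgb, lidar_bev):
        feats = torch.cat([self.rgb_encoder(rgb), self.lidar_encoder(lidar_bev)], dim=1)
        return self.head(feats)  # predicted steering wheel angle

model = SteeringNet()
angle = model(torch.randn(2, 3, 120, 160), torch.randn(2, 1, 120, 160))  # (2, 1)
```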
arXiv Detail & Related papers (2020-10-28T12:38:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.