Self-supervised Monocular Depth Estimation: Let's Talk About The Weather
- URL: http://arxiv.org/abs/2307.08357v1
- Date: Mon, 17 Jul 2023 09:50:03 GMT
- Title: Self-supervised Monocular Depth Estimation: Let's Talk About The Weather
- Authors: Kieran Saunders, George Vogiatzis and Luis Manso
- Abstract summary: Current self-supervised depth estimation architectures rely on clear and sunny weather scenes to train deep neural networks.
In this paper, we put forward a method that uses augmentations to remedy this problem.
We present extensive testing to show that our method, Robust-Depth, achieves SotA performance on the KITTI dataset.
- Score: 2.836066255205732
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Current self-supervised depth estimation architectures rely on clear and
sunny weather scenes to train deep neural networks. However, in many locations this
assumption is too strong: in the UK, for example, 149 days in 2021 had rain. For these
architectures to be effective in real-world applications, we
must create models that can generalise to all weather conditions, times of the
day and image qualities. Using a combination of computer graphics and
generative models, one can augment existing sunny-weather data in a variety of
ways that simulate adverse weather effects. While it is tempting to use such
data augmentations for self-supervised depth, in the past this was shown to
degrade performance instead of improving it. In this paper, we put forward a
method that uses augmentations to remedy this problem. By exploiting the
correspondence between unaugmented and augmented data we introduce a
pseudo-supervised loss for both depth and pose estimation. This brings back
some of the benefits of supervised learning while still not requiring any
labels. We also make a series of practical recommendations which collectively
offer a reliable, efficient framework for weather-related augmentation of
self-supervised depth from monocular video. We present extensive testing to
show that our method, Robust-Depth, achieves SotA performance on the KITTI
dataset while significantly surpassing SotA on challenging, adverse condition
data such as DrivingStereo, Foggy CityScape and NuScenes-Night. The project
website can be found here https://kieran514.github.io/Robust-Depth-Project/.
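The abstract above hinges on a pseudo-supervised loss between predictions on augmented and unaugmented views of the same scene. Below is a minimal sketch of that idea for the depth branch only, written in PyTorch with illustrative names; it is not the Robust-Depth implementation, and the real method also applies an analogous term to the pose network.
```python
import torch
import torch.nn.functional as F

def pseudo_supervised_depth_loss(depth_net: torch.nn.Module,
                                 clean_img: torch.Tensor,
                                 aug_img: torch.Tensor) -> torch.Tensor:
    """Sketch only: depth predicted on the clean image acts as a pseudo-label
    for depth predicted on the weather-augmented image of the same frame.
    clean_img / aug_img: (B, 3, H, W) tensors of the same scene."""
    with torch.no_grad():                 # pseudo-label: no gradient flows back
        depth_clean = depth_net(clean_img)
    depth_aug = depth_net(aug_img)        # prediction being supervised
    # The paper may weight, mask or scale this term; a plain L1 is shown here.
    return F.l1_loss(depth_aug, depth_clean)
```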
Related papers
- Robust ADAS: Enhancing Robustness of Machine Learning-based Advanced Driver Assistance Systems for Adverse Weather [5.383130566626935]
This paper employs a Denoising Deep Neural Network as a preprocessing step to transform adverse weather images into clear weather images.
It improves driver visualization, which is critical for safe navigation in adverse weather conditions.
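A minimal sketch of the denoise-then-perceive pipeline described above, assuming generic PyTorch modules; `denoiser` and `detector` are placeholder names, not components from the paper:
```python
import torch

def preprocess_and_detect(denoiser: torch.nn.Module,
                          detector: torch.nn.Module,
                          adverse_img: torch.Tensor):
    """Sketch only: map an adverse-weather image to a clear-weather estimate,
    then run the unchanged downstream ADAS perception model on it."""
    with torch.no_grad():
        clear_estimate = denoiser(adverse_img)  # adverse -> clear-weather image
    return detector(clear_estimate)             # downstream model sees clear input
```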
arXiv Detail & Related papers (2024-07-02T18:03:52Z)
- Learning Robust Precipitation Forecaster by Temporal Frame Interpolation [65.5045412005064]
We develop a robust precipitation forecasting model that demonstrates resilience against spatial-temporal discrepancies.
Our approach has led to significant improvements in forecasting precision, culminating in our model securing 1st place in the transfer-learning leaderboard of the Weather4cast'23 competition.
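The summary does not spell out the interpolation scheme; the sketch below only illustrates the general idea of synthesising an intermediate precipitation frame from two consecutive observations (a plain linear blend with illustrative names, not the paper's learned interpolator):
```python
import torch

def blend_frames(frame_t: torch.Tensor,
                 frame_t1: torch.Tensor,
                 alpha: float = 0.5) -> torch.Tensor:
    """Sketch only: a synthetic intermediate frame between two consecutive
    precipitation maps, usable as an extra training sample."""
    return (1.0 - alpha) * frame_t + alpha * frame_t1
```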
arXiv Detail & Related papers (2023-11-30T08:22:08Z)
- WeatherDepth: Curriculum Contrastive Learning for Self-Supervised Depth Estimation under Adverse Weather Conditions [42.99525455786019]
We propose WeatherDepth, a self-supervised robust depth estimation model with curriculum contrastive learning.
The proposed solution is proven to be easily incorporated into various architectures and demonstrates state-of-the-art (SoTA) performance on both synthetic and real weather datasets.
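One way to picture the curriculum part is a severity schedule that moves training from clear scenes to progressively harder weather; the stage names and boundaries below are purely illustrative, not WeatherDepth's actual schedule:
```python
def curriculum_stage(epoch: int, boundaries=(10, 20)) -> str:
    """Sketch only: pick the weather-augmentation difficulty for this epoch."""
    if epoch < boundaries[0]:
        return "clear"              # start on clean, sunny data
    if epoch < boundaries[1]:
        return "mild_weather"       # e.g. light rain, thin fog
    return "severe_weather"         # e.g. heavy rain, dense fog, night
```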
arXiv Detail & Related papers (2023-10-09T09:26:27Z)
- Robust Monocular Depth Estimation under Challenging Conditions [81.57697198031975]
State-of-the-art monocular depth estimation approaches are highly unreliable under challenging illumination and weather conditions.
We tackle these safety-critical issues with md4all: a simple and effective solution that works reliably under both adverse and ideal conditions.
arXiv Detail & Related papers (2023-08-18T17:59:01Z)
- Counting Crowds in Bad Weather [68.50690406143173]
We propose a method for robust crowd counting in adverse weather scenarios.
Our model learns effective features and adaptive queries to account for large appearance variations.
Experimental results show that the proposed algorithm is effective in counting crowds under different weather types on benchmark datasets.
arXiv Detail & Related papers (2023-06-02T00:00:09Z)
- ScatterNeRF: Seeing Through Fog with Physically-Based Inverse Neural Rendering [83.75284107397003]
We introduce ScatterNeRF, a neural rendering method which renders scenes and decomposes the fog-free background.
We propose a disentangled representation for the scattering volume and the scene objects, and learn the scene reconstruction with physics-inspired losses.
We validate our method by capturing multi-view In-the-Wild data and controlled captures in a large-scale fog chamber.
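The decomposition into a fog-free background and a scattering volume can be pictured with the standard single-scattering (Koschmieder) image-formation model; the sketch below composes a foggy image from a clear one using that textbook model with illustrative parameters, and is not ScatterNeRF's actual volumetric renderer:
```python
import torch

def compose_fog(clear_rgb: torch.Tensor,
                depth: torch.Tensor,
                beta: float = 0.05,
                airlight: float = 0.8) -> torch.Tensor:
    """Sketch only: I = J * t + A * (1 - t), with transmittance t = exp(-beta * d).
    clear_rgb: (B, 3, H, W), depth: (B, 1, H, W); beta and airlight are illustrative."""
    t = torch.exp(-beta * depth)                 # per-pixel transmittance
    return clear_rgb * t + airlight * (1.0 - t)  # attenuated scene + airlight
```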
arXiv Detail & Related papers (2023-05-03T13:24:06Z)
- TransWeather: Transformer-based Restoration of Images Degraded by Adverse Weather Conditions [77.20136060506906]
We propose TransWeather, a transformer-based end-to-end model with just a single encoder and a decoder.
TransWeather achieves significant improvements over the All-in-One network across multiple test datasets.
It is validated on real world test images and found to be more effective than previous methods.
arXiv Detail & Related papers (2021-11-29T18:57:09Z)
- Lidar Light Scattering Augmentation (LISA): Physics-based Simulation of Adverse Weather Conditions for 3D Object Detection [60.89616629421904]
Lidar-based object detectors are critical parts of the 3D perception pipeline in autonomous navigation systems such as self-driving cars.
They are sensitive to adverse weather conditions such as rain, snow and fog due to reduced signal-to-noise ratio (SNR) and signal-to-background ratio (SBR).
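To make the SNR/SBR point concrete, here is a toy point-cloud augmentation in that spirit (range-dependent return attenuation plus dropping of weak returns); it is only an illustration, not LISA's physics-based scattering simulation:
```python
import torch

def toy_weather_lidar_augment(points: torch.Tensor,
                              intensity: torch.Tensor,
                              alpha: float = 0.01,
                              drop_scale: float = 0.5):
    """Sketch only. points: (N, 3) xyz, intensity: (N,).
    Attenuate intensities with range (Beer-Lambert style, two-way path) and
    randomly drop points whose returns have become weak."""
    rng = torch.linalg.norm(points, dim=1)            # range of each point
    transmittance = torch.exp(-2.0 * alpha * rng)     # two-way attenuation
    intensity = intensity * transmittance
    keep = torch.rand_like(rng) > drop_scale * (1.0 - transmittance)
    return points[keep], intensity[keep]
```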
arXiv Detail & Related papers (2021-07-14T21:10:47Z)
- Robustness of Object Detectors in Degrading Weather Conditions [7.91378990016322]
State-of-the-art object detection systems for autonomous driving achieve promising results in clear weather conditions.
These systems need to work in degrading weather conditions, such as rain, fog and snow.
Most approaches evaluate only on the KITTI dataset, which consists only of clear weather scenes.
arXiv Detail & Related papers (2021-06-16T13:56:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.