Robust Monocular Depth Estimation under Challenging Conditions
- URL: http://arxiv.org/abs/2308.09711v1
- Date: Fri, 18 Aug 2023 17:59:01 GMT
- Title: Robust Monocular Depth Estimation under Challenging Conditions
- Authors: Stefano Gasperini, Nils Morbitzer, HyunJun Jung, Nassir Navab,
Federico Tombari
- Abstract summary: State-of-the-art monocular depth estimation approaches are highly unreliable under challenging illumination and weather conditions.
We tackle these safety-critical issues with md4all: a simple and effective solution that works reliably under both adverse and ideal conditions.
- Score: 81.57697198031975
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While state-of-the-art monocular depth estimation approaches achieve
impressive results in ideal settings, they are highly unreliable under
challenging illumination and weather conditions, such as at nighttime or in the
presence of rain. In this paper, we uncover these safety-critical issues and
tackle them with md4all: a simple and effective solution that works reliably
under both adverse and ideal conditions, as well as for different types of
learning supervision. We achieve this by exploiting the efficacy of existing
methods under perfect settings. Therefore, we provide valid training signals
independently of what is in the input. First, we generate a set of complex
samples corresponding to the normal training ones. Then, we train the model by
guiding its self- or full-supervision by feeding the generated samples and
computing the standard losses on the corresponding original images. Doing so
enables a single model to recover information across diverse conditions without
modifications at inference time. Extensive experiments on two challenging
public datasets, namely nuScenes and Oxford RobotCar, demonstrate the
effectiveness of our techniques, outperforming prior works by a large margin in
both standard and challenging conditions. Source code and data are available
at: https://md4all.github.io.
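The training scheme described in the abstract, feeding generated adverse samples into the model while computing the standard losses against supervision derived from the corresponding original images, can be sketched as a toy example. This is a hedged illustration under placeholder assumptions, not the authors' implementation: `toy_depth_net`, the darkening-plus-noise proxy standing in for adverse-sample generation, and the pseudo-depth target standing in for clean-image supervision are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_depth_net(img, w):
    # hypothetical stand-in for a monocular depth network (one scalar weight)
    return img * w

def l1_loss(pred, target):
    return float(np.mean(np.abs(pred - target)))

# original "ideal-condition" image and a generated adverse counterpart
# (a simple darkening + noise proxy for, e.g., a day-to-night translation)
clean = rng.random((8, 8))
adverse = np.clip(0.3 * clean + rng.normal(0.0, 0.02, clean.shape), 0.0, 1.0)

# supervision derived from the ORIGINAL clean image, e.g. pseudo-depth
# produced by a baseline model that is reliable in ideal conditions
target = 2.0 * clean

w, lr = 0.0, 0.5
initial_loss = l1_loss(toy_depth_net(adverse, w), target)
for _ in range(300):
    pred = toy_depth_net(adverse, w)   # feed the generated adverse sample ...
    # ... but compute the standard loss against clean-image supervision
    grad = np.mean(np.sign(pred - target) * adverse)  # L1 subgradient w.r.t. w
    w -= lr * grad
final_loss = l1_loss(toy_depth_net(adverse, w), target)
```

Because the loss is always anchored to the original sample, the model learns to recover consistent depth from degraded inputs without any change at inference time, which is the property the abstract emphasizes.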
Related papers
- Efficient Imitation Learning with Conservative World Models [54.52140201148341]
We tackle the problem of policy learning from expert demonstrations without a reward function.
We re-frame imitation learning as a fine-tuning problem, rather than a pure reinforcement learning one.
arXiv Detail & Related papers (2024-05-21T20:53:18Z)
- Combating Missing Modalities in Egocentric Videos at Test Time [92.38662956154256]
Real-world applications often face challenges with incomplete modalities due to privacy concerns, efficiency needs, or hardware issues.
We propose a novel approach to address this issue at test time without requiring retraining.
MiDl represents the first self-supervised, online solution for handling missing modalities exclusively at test time.
arXiv Detail & Related papers (2024-04-23T16:01:33Z)
- Stealing Stable Diffusion Prior for Robust Monocular Depth Estimation [33.140210057065644]
This paper introduces a novel approach named Stealing Stable Diffusion (SSD) prior for robust monocular depth estimation.
The approach addresses the unreliability of monocular depth estimation under challenging conditions by utilizing Stable Diffusion to generate synthetic images that mimic such conditions.
The effectiveness of the approach is evaluated on nuScenes and Oxford RobotCar, two challenging public datasets.
arXiv Detail & Related papers (2024-03-08T05:06:31Z)
- Learning Real-World Image De-Weathering with Imperfect Supervision [57.748585821252824]
Existing real-world de-weathering datasets often exhibit inconsistent illumination, position, and textures between the ground-truth images and the input degraded images.
We develop a Consistent Label Constructor (CLC) to generate a pseudo-label as consistent as possible with the input degraded image.
We combine the original imperfect labels and pseudo-labels to jointly supervise the de-weathering model by the proposed Information Allocation Strategy.
arXiv Detail & Related papers (2023-10-23T14:02:57Z)
- WeatherDepth: Curriculum Contrastive Learning for Self-Supervised Depth Estimation under Adverse Weather Conditions [42.99525455786019]
We propose WeatherDepth, a self-supervised robust depth estimation model with curriculum contrastive learning.
The proposed solution is proven to be easily incorporated into various architectures and demonstrates state-of-the-art (SoTA) performance on both synthetic and real weather datasets.
arXiv Detail & Related papers (2023-10-09T09:26:27Z)
- Towards a robust and reliable deep learning approach for detection of compact binary mergers in gravitational wave data [0.0]
We develop a deep learning model stage-wise and work towards improving its robustness and reliability.
We retrain the model in a novel framework involving a generative adversarial network (GAN).
Although absolute robustness is practically impossible to achieve, we demonstrate some fundamental improvements gained through such training.
arXiv Detail & Related papers (2023-06-20T18:00:05Z) - Rethinking Real-world Image Deraining via An Unpaired Degradation-Conditioned Diffusion Model [51.49854435403139]
We propose RainDiff, the first real-world image deraining paradigm based on diffusion models.
We introduce a stable and non-adversarial unpaired cycle-consistent architecture that can be trained, end-to-end, with only unpaired data for supervision.
We also propose a degradation-conditioned diffusion model that refines the desired output via a diffusive generative process conditioned by learned priors of multiple rain degradations.
arXiv Detail & Related papers (2023-01-23T13:34:01Z) - An Efficient Domain-Incremental Learning Approach to Drive in All
Weather Conditions [8.436505917796174]
Deep neural networks enable impressive visual perception performance for autonomous driving.
They are prone to forgetting previously learned information when adapting to different weather conditions.
We propose DISC -- Domain Incremental through Statistical Correction -- a simple zero-forgetting approach which can incrementally learn new tasks.
arXiv Detail & Related papers (2022-04-19T11:39:20Z) - Parameter-free Online Test-time Adaptation [19.279048049267388]
We show how test-time adaptation methods fare for a number of pre-trained models on a variety of real-world scenarios.
We propose a particularly "conservative" approach, which addresses the problem with a Laplacian Adjusted Maximum-likelihood Estimation (LAME) objective.
Our approach exhibits a much higher average accuracy across scenarios than existing methods, while being notably faster and having a much lower memory footprint.
arXiv Detail & Related papers (2022-01-15T00:29:16Z) - Self-Damaging Contrastive Learning [92.34124578823977]
Unlabeled data in reality is commonly imbalanced and shows a long-tail distribution.
This paper proposes a principled framework called Self-Damaging Contrastive Learning to automatically balance the representation learning without knowing the classes.
Our experiments show that SDCLR significantly improves not only overall accuracies but also balancedness.
arXiv Detail & Related papers (2021-06-06T00:04:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.