RoboDepth: Robust Out-of-Distribution Depth Estimation under Corruptions
- URL: http://arxiv.org/abs/2310.15171v1
- Date: Mon, 23 Oct 2023 17:59:59 GMT
- Title: RoboDepth: Robust Out-of-Distribution Depth Estimation under Corruptions
- Authors: Lingdong Kong and Shaoyuan Xie and Hanjiang Hu and Lai Xing Ng and
Benoit R. Cottereau and Wei Tsang Ooi
- Abstract summary: We introduce a comprehensive robustness test suite, RoboDepth, encompassing 18 corruptions spanning three categories.
We benchmark 42 depth estimation models across indoor and outdoor scenes to assess their resilience to these corruptions.
Our findings underscore that, in the absence of a dedicated robustness evaluation framework, many leading depth estimation models may be susceptible to typical corruptions.
- Score: 7.359657743276515
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Depth estimation from monocular images is pivotal for real-world visual
perception systems. While current learning-based depth estimation models train
and test on meticulously curated data, they often overlook out-of-distribution
(OoD) situations. Yet, in practical settings -- especially safety-critical ones
like autonomous driving -- common corruptions can arise. Addressing this
oversight, we introduce a comprehensive robustness test suite, RoboDepth,
encompassing 18 corruptions spanning three categories: i) weather and lighting
conditions; ii) sensor failures and movement; and iii) data processing
anomalies. We subsequently benchmark 42 depth estimation models across indoor
and outdoor scenes to assess their resilience to these corruptions. Our
findings underscore that, in the absence of a dedicated robustness evaluation
framework, many leading depth estimation models may be susceptible to typical
corruptions. We delve into design considerations for crafting more robust depth
estimation models, touching upon pre-training, augmentation, modality, model
capacity, and learning paradigms. We anticipate our benchmark will establish a
foundational platform for advancing robust OoD depth estimation.
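To make the evaluation protocol concrete, the sketch below illustrates the general idea: corrupt each input image, run an unchanged depth model, and compare a standard depth metric against the clean run. This is not the official RoboDepth toolkit; the corruption type, severity values, and helper names are illustrative assumptions.

```python
# A minimal sketch of corruption-robustness evaluation for depth estimation.
# Assumptions: `model` maps an HxWx3 uint8 image to a dense depth map (numpy array).
import numpy as np

def gaussian_noise(img, severity=3, rng=None):
    """Simulate a sensor-failure style corruption on an HxWx3 uint8 image."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = [0.04, 0.06, 0.08, 0.09, 0.10][severity - 1]   # illustrative severities
    noisy = img.astype(np.float32) / 255.0 + rng.normal(0.0, sigma, img.shape)
    return (np.clip(noisy, 0.0, 1.0) * 255.0).astype(np.uint8)

def abs_rel(pred, gt, mask):
    """Standard absolute-relative depth error over valid (mask) pixels."""
    return float(np.mean(np.abs(pred[mask] - gt[mask]) / gt[mask]))

def evaluate(model, images, gts):
    """Report mean error on clean vs. corrupted copies of each image."""
    clean, corrupt = [], []
    for img, gt in zip(images, gts):
        mask = gt > 0
        clean.append(abs_rel(model(img), gt, mask))
        corrupt.append(abs_rel(model(gaussian_noise(img)), gt, mask))
    # The gap between the two numbers measures out-of-distribution robustness.
    return float(np.mean(clean)), float(np.mean(corrupt))
```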
Related papers
- Structure-Centric Robust Monocular Depth Estimation via Knowledge Distillation [9.032563775151074]
Monocular depth estimation is a key technique for 3D perception in computer vision.
It faces significant challenges in real-world scenarios, including adverse weather, motion blur, and poorly lit night scenes.
We devise a novel approach to reduce over-reliance on local textures, enhancing robustness against missing or interfering patterns.
arXiv Detail & Related papers (2024-10-09T15:20:29Z) - PoseBench: Benchmarking the Robustness of Pose Estimation Models under Corruptions [57.871692507044344]
Pose estimation aims to accurately identify anatomical keypoints in humans and animals using monocular images.
Current models are typically trained and tested on clean data, potentially overlooking corruptions that arise during real-world deployment.
We introduce PoseBench, a benchmark designed to evaluate the robustness of pose estimation models against real-world corruptions.
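As a hedged illustration of how such a benchmark can quantify robustness, the sketch below computes PCK, a standard keypoint-accuracy metric that one would report on clean versus corrupted inputs; PoseBench's exact metric and thresholds are assumptions here.

```python
# A minimal sketch of PCK (percentage of correct keypoints): a keypoint counts as
# correct if it lies within a fraction of the instance's reference size from ground truth.
import numpy as np

def pck(pred_kpts, gt_kpts, ref_sizes, alpha=0.2):
    """pred_kpts, gt_kpts: (N, K, 2) pixel coordinates; ref_sizes: (N,) scale per instance."""
    dists = np.linalg.norm(pred_kpts - gt_kpts, axis=-1)   # (N, K) keypoint distances
    correct = dists <= alpha * ref_sizes[:, None]           # threshold scaled per instance
    return float(correct.mean())
```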
arXiv Detail & Related papers (2024-06-20T14:40:17Z) - Benchmarking and Improving Bird's Eye View Perception Robustness in Autonomous Driving [55.93813178692077]
We present RoboBEV, an extensive benchmark suite designed to evaluate the resilience of BEV algorithms.
We assess 33 state-of-the-art BEV-based perception models spanning tasks like detection, map segmentation, depth estimation, and occupancy prediction.
Our experimental results also underline the efficacy of strategies like pre-training and depth-free BEV transformations in enhancing robustness against out-of-distribution data.
arXiv Detail & Related papers (2024-05-27T17:59:39Z) - Calib3D: Calibrating Model Preferences for Reliable 3D Scene Understanding [55.32861154245772]
Calib3D is a pioneering effort to benchmark and scrutinize the reliability of 3D scene understanding models.
We evaluate 28 state-of-the-art models across 10 diverse 3D datasets.
We introduce DeptS, a novel depth-aware scaling approach aimed at enhancing 3D model calibration.
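As a generic illustration of the calibration quantities such a benchmark scrutinizes (not a reproduction of DeptS), the sketch below computes expected calibration error by binning predictions by confidence and comparing average confidence with accuracy.

```python
# A minimal sketch of expected calibration error (ECE) for a classifier's outputs.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """confidences: max softmax score per prediction; correct: boolean array of hits."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            # Weight each bin's |accuracy - confidence| gap by its share of predictions.
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece
```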
arXiv Detail & Related papers (2024-03-25T17:59:59Z) - The RoboDepth Challenge: Methods and Advancements Towards Robust Depth Estimation [97.63185634482552]
We summarize the winning solutions from the RoboDepth Challenge.
The challenge was designed to facilitate and advance robust OoD depth estimation.
We hope this challenge could lay a solid foundation for future research on robust and reliable depth estimation.
arXiv Detail & Related papers (2023-07-27T17:59:56Z) - Robo3D: Towards Robust and Reliable 3D Perception against Corruptions [58.306694836881235]
We present Robo3D, the first comprehensive benchmark for probing the robustness of 3D detectors and segmentors under out-of-distribution scenarios.
We consider eight corruption types stemming from severe weather conditions, external disturbances, and internal sensor failure.
We propose a density-insensitive training framework along with a simple, flexible voxelization strategy to enhance model resiliency.
arXiv Detail & Related papers (2023-03-30T17:59:17Z) - Benchmarking Robustness of 3D Object Detection to Common Corruptions in
Autonomous Driving [44.753797839280516]
Existing 3D detectors lack robustness to real-world corruptions caused by adverse weather, sensor noise, etc.
We benchmark 27 types of common corruptions for both LiDAR and camera inputs considering real-world driving scenarios.
We conduct large-scale experiments on 24 diverse 3D object detection models to evaluate their robustness.
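As a hedged illustration of the LiDAR side of such corruptions (parameters are assumptions, not the benchmark's actual settings), the sketch below adds coordinate noise to mimic ranging error and randomly drops points to mimic beam loss.

```python
# A minimal sketch of two common LiDAR corruptions applied to a point cloud.
import numpy as np

def corrupt_lidar(points, noise_std=0.02, drop_ratio=0.1, rng=None):
    """points: (N, 4) array of x, y, z, intensity."""
    rng = np.random.default_rng() if rng is None else rng
    pts = points.copy()
    pts[:, :3] += rng.normal(0.0, noise_std, size=(len(pts), 3))   # simulate ranging noise
    keep = rng.random(len(pts)) >= drop_ratio                       # simulate beam/point loss
    return pts[keep]
```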
arXiv Detail & Related papers (2023-03-20T11:45:54Z) - SC-DepthV3: Robust Self-supervised Monocular Depth Estimation for
Dynamic Scenes [58.89295356901823]
Self-supervised monocular depth estimation has shown impressive results in static scenes.
It relies on the multi-view consistency assumption to train networks; however, this assumption is violated in dynamic object regions.
We introduce an external pretrained monocular depth estimation model to generate a single-image depth prior.
Our model can predict sharp and accurate depth maps, even when training from monocular videos of highly-dynamic scenes.
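To clarify why dynamic objects break the training signal, the sketch below shows the standard photometric reprojection loss underlying multi-view consistency (a generic formulation, not SC-DepthV3's exact loss): pixels are back-projected with the predicted depth, moved by the relative camera pose, and resampled from the source frame, which matches the target frame only when the scene is static.

```python
# A minimal sketch of the photometric reprojection loss used in self-supervised
# monocular depth training. Assumptions: PyTorch, known intrinsics K, and a
# relative pose T_t2s mapping target-frame points into the source frame.
import torch
import torch.nn.functional as F

def photometric_loss(depth_t, img_t, img_s, K, T_t2s):
    """depth_t: (B,1,H,W); img_t, img_s: (B,3,H,W); K: (B,3,3); T_t2s: (B,4,4)."""
    B, _, H, W = depth_t.shape
    device = depth_t.device
    # Pixel grid in homogeneous coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(H, device=device, dtype=torch.float32),
        torch.arange(W, device=device, dtype=torch.float32),
        indexing="ij")
    ones = torch.ones_like(xs)
    pix = torch.stack([xs, ys, ones], dim=0).reshape(1, 3, -1).expand(B, -1, -1)
    # Back-project to 3D with the predicted depth, then move into the source frame.
    cam = torch.linalg.inv(K) @ pix * depth_t.reshape(B, 1, -1)
    cam_h = torch.cat([cam, torch.ones(B, 1, H * W, device=device)], dim=1)
    cam_s = (T_t2s @ cam_h)[:, :3]
    # Project into the source image plane and normalise to [-1, 1] for grid_sample.
    proj = K @ cam_s
    uv = proj[:, :2] / proj[:, 2:].clamp(min=1e-6)
    u = 2.0 * uv[:, 0] / (W - 1) - 1.0
    v = 2.0 * uv[:, 1] / (H - 1) - 1.0
    grid = torch.stack([u, v], dim=-1).reshape(B, H, W, 2)
    warped = F.grid_sample(img_s, grid, padding_mode="border", align_corners=True)
    # Static scenes make `warped` resemble img_t; dynamic objects violate this
    # and inflate the loss, which is why extra priors or masks are needed.
    return (warped - img_t).abs().mean()
```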
arXiv Detail & Related papers (2022-11-07T16:17:47Z) - Variational Monocular Depth Estimation for Reliability Prediction [12.951621755732544]
Self-supervised learning for monocular depth estimation is widely investigated as an alternative to supervised learning approaches.
Previous works have successfully improved the accuracy of depth estimation by modifying the model structure.
In this paper, we theoretically formulate a variational model for monocular depth estimation to predict the reliability of the estimated depth image.
arXiv Detail & Related papers (2020-11-24T06:23:51Z)
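As a simple point of reference for reliability prediction (not the paper's variational formulation, and using ground-truth depth purely for illustration), the sketch below attaches a per-pixel log-variance output trained with a Gaussian negative log-likelihood, so the predicted variance acts as a reliability map.

```python
# A minimal sketch of heteroscedastic uncertainty for depth: the network predicts
# a log-variance channel alongside depth, and large variance flags unreliable pixels.
import torch

def gaussian_nll(depth_pred, log_var, depth_gt):
    """Per-pixel Gaussian negative log-likelihood (constants dropped)."""
    inv_var = torch.exp(-log_var)
    return (inv_var * (depth_pred - depth_gt) ** 2 + log_var).mean()

# Hypothetical usage: channel 0 of the network output is depth, channel 1 is log-variance.
out = torch.randn(2, 2, 192, 640)                      # placeholder network output
gt = torch.rand(2, 1, 192, 640) * 80.0                 # placeholder ground-truth depth
loss = gaussian_nll(out[:, 0:1], out[:, 1:2], gt)
```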