The Monocular Depth Estimation Challenge
- URL: http://arxiv.org/abs/2211.12174v1
- Date: Tue, 22 Nov 2022 11:04:15 GMT
- Title: The Monocular Depth Estimation Challenge
- Authors: Jaime Spencer, C. Stella Qian, Chris Russell, Simon Hadfield, Erich
Graf, Wendy Adams, Andrew J. Schofield, James Elder, Richard Bowden, Heng
Cong, Stefano Mattoccia, Matteo Poggi, Zeeshan Khan Suri, Yang Tang, Fabio
Tosi, Hao Wang, Youmin Zhang, Yusheng Zhang, Chaoqiang Zhao
- Abstract summary: This paper summarizes the results of the first Monocular Depth Estimation Challenge (MDEC) organized at WACV2023.
The challenge evaluated the progress of self-supervised monocular depth estimation on the challenging SYNS-Patches dataset.
- Score: 74.0535474077928
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper summarizes the results of the first Monocular Depth Estimation
Challenge (MDEC) organized at WACV2023. This challenge evaluated the progress
of self-supervised monocular depth estimation on the challenging SYNS-Patches
dataset. The challenge was organized on CodaLab and received submissions from 4
valid teams. Participants were provided a devkit containing updated reference
implementations for 16 State-of-the-Art algorithms and 4 novel techniques. The
acceptance threshold for a novel technique was to outperform every one of
the 16 SotA baselines. All participants outperformed the baseline in
traditional metrics such as MAE or AbsRel. However, pointcloud reconstruction
metrics were challenging to improve upon. We found predictions were
characterized by interpolation artefacts at object boundaries and errors in
relative object positioning. We hope this challenge is a valuable contribution
to the community and encourage authors to participate in future editions.
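The traditional metrics named in the abstract, MAE and AbsRel, are simple per-pixel depth errors. A minimal sketch of both, assuming dense predicted and ground-truth depth maps with invalid ground truth marked by non-positive values (the masking convention is an assumption for illustration, not taken from the challenge devkit):

```python
import numpy as np

def depth_metrics(pred, gt):
    """Compute two standard depth-estimation error metrics.

    pred, gt: arrays of predicted and ground-truth depths (same shape);
    only pixels with valid (positive) ground truth are evaluated.
    """
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    valid = gt > 0                      # mask out invalid ground truth
    err = np.abs(pred[valid] - gt[valid])
    mae = err.mean()                    # Mean Absolute Error, in depth units
    abs_rel = (err / gt[valid]).mean()  # error relative to the true depth
    return mae, abs_rel
```

For example, predictions `[2.0, 4.0]` against ground truth `[1.0, 4.0]` give MAE = 0.5 and AbsRel = 0.5.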
Related papers
- AIM 2024 Sparse Neural Rendering Challenge: Methods and Results [64.19942455360068]
This paper reviews the challenge on Sparse Neural Rendering that was part of the Advances in Image Manipulation (AIM) workshop, held in conjunction with ECCV 2024.
The challenge aims at producing novel camera view synthesis of diverse scenes from sparse image observations.
Participants are asked to optimise objective fidelity to the ground-truth images as measured via the Peak Signal-to-Noise Ratio (PSNR) metric.
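The fidelity objective above, PSNR, can be sketched in a few lines; this assumes images normalised to [0, max_val] and is an illustration, not the challenge's official scoring code:

```python
import numpy as np

def psnr(pred, gt, max_val=1.0):
    """Peak Signal-to-Noise Ratio in dB for images in [0, max_val]."""
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    mse = np.mean((pred - gt) ** 2)   # mean squared pixel error
    if mse == 0:
        return float("inf")           # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```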
arXiv Detail & Related papers (2024-09-23T14:17:40Z)
- The Third Monocular Depth Estimation Challenge [134.16634233789776]
This paper discusses the results of the third edition of the Monocular Depth Estimation Challenge (MDEC)
The challenge focuses on zero-shot generalization to the challenging SYNS-Patches dataset, featuring complex scenes in natural and indoor settings.
The challenge winners drastically improved 3D F-Score performance, from 17.51% to 23.72%.
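The 3D F-Score above is the harmonic mean of point-cloud precision and recall at a distance threshold. A brute-force sketch, where the threshold `tau` and the O(N·M) nearest-neighbour search are illustrative choices rather than the challenge's evaluation code (real point clouds would use a KD-tree):

```python
import numpy as np

def f_score(pred_pts, gt_pts, tau=0.1):
    """Point-cloud reconstruction F-Score at distance threshold tau.

    pred_pts, gt_pts: (N, 3) arrays of 3D points.
    """
    pred_pts = np.asarray(pred_pts, dtype=float)
    gt_pts = np.asarray(gt_pts, dtype=float)
    # pairwise distances between every predicted and ground-truth point
    d = np.linalg.norm(pred_pts[:, None, :] - gt_pts[None, :, :], axis=-1)
    precision = np.mean(d.min(axis=1) <= tau)  # pred points near some GT point
    recall = np.mean(d.min(axis=0) <= tau)     # GT points near some pred point
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```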
arXiv Detail & Related papers (2024-04-25T17:59:59Z)
- The RoboDepth Challenge: Methods and Advancements Towards Robust Depth Estimation [97.63185634482552]
We summarize the winning solutions from the RoboDepth Challenge.
The challenge was designed to facilitate and advance robust OoD depth estimation.
We hope this challenge could lay a solid foundation for future research on robust and reliable depth estimation.
arXiv Detail & Related papers (2023-07-27T17:59:56Z)
- The Second Monocular Depth Estimation Challenge [93.1678025923996]
The second edition of the Monocular Depth Estimation Challenge (MDEC) was open to methods using any form of supervision.
The challenge was based around the SYNS-Patches dataset, which features a wide diversity of environments with high-quality dense ground-truth.
The top supervised submission improved relative F-Score by 27.62%, while the top self-supervised improved it by 16.61%.
arXiv Detail & Related papers (2023-04-14T11:10:07Z)
- Unsupervised Deep Persistent Monocular Visual Odometry and Depth Estimation in Extreme Environments [7.197188771058501]
Unsupervised deep learning approaches have received significant attention for estimating depth and visual odometry (VO) from unlabelled monocular image sequences.
We propose an unsupervised monocular deep VO framework that predicts the six-degrees-of-freedom camera pose and the depth map of the scene from unlabelled RGB image sequences.
The proposed approach outperforms both traditional and state-of-the-art unsupervised deep VO methods, providing better results for both pose estimation and depth recovery.
arXiv Detail & Related papers (2020-10-31T19:10:27Z)
- Recognizing Families In the Wild: White Paper for the 4th Edition Data Challenge [91.55319616114943]
This paper summarizes the supported tasks (i.e., kinship verification, tri-subject verification, and search & retrieval of missing children) in the Recognizing Families In the Wild (RFIW) evaluation.
The purpose of this paper is to describe the 2020 RFIW challenge, end-to-end, along with forecasts in promising future directions.
arXiv Detail & Related papers (2020-02-15T02:22:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.