The Fourth Monocular Depth Estimation Challenge
- URL: http://arxiv.org/abs/2504.17787v1
- Date: Thu, 24 Apr 2025 17:59:52 GMT
- Title: The Fourth Monocular Depth Estimation Challenge
- Authors: Anton Obukhov, Matteo Poggi, Fabio Tosi, Ripudaman Singh Arora, Jaime Spencer, Chris Russell, Simon Hadfield, Richard Bowden, Shuaihang Wang, Zhenxin Ma, Weijie Chen, Baobei Xu, Fengyu Sun, Di Xie, Jiang Zhu, Mykola Lavreniuk, Haining Guan, Qun Wu, Yupei Zeng, Chao Lu, Huanran Wang, Guangyuan Zhou, Haotian Zhang, Jianxiong Wang, Qiang Rao, Chunjie Wang, Xiao Liu, Zhiqiang Lou, Hualie Jiang, Yihao Chen, Rui Xu, Minglang Tan, Zihan Qin, Yifan Mao, Jiayang Liu, Jialei Xu, Yifan Yang, Wenbo Zhao, Junjun Jiang, Xianming Liu, Mingshuai Zhao, Anlong Ming, Wu Chen, Feng Xue, Mengying Yu, Shida Gao, Xiangfeng Wang, Gbenga Omotara, Ramy Farag, Jacket Demby, Seyed Mohamad Ali Tousi, Guilherme N DeSouza, Tuan-Anh Yang, Minh-Quang Nguyen, Thien-Phuc Tran, Albert Luginov, Muhammad Shahzad,
- Abstract summary: This paper presents the results of the fourth edition of the Monocular Depth Estimation Challenge (MDEC). It focuses on zero-shot generalization to the SYNS-Patches benchmark, a dataset featuring challenging environments in both natural and indoor settings. The challenge received a total of 24 submissions that outperformed the baselines on the test set. The challenge winners improved the 3D F-Score over the previous edition's best result, raising it from 22.58% to 23.05%.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents the results of the fourth edition of the Monocular Depth Estimation Challenge (MDEC), which focuses on zero-shot generalization to the SYNS-Patches benchmark, a dataset featuring challenging environments in both natural and indoor settings. In this edition, we revised the evaluation protocol to use least-squares alignment with two degrees of freedom to support disparity and affine-invariant predictions. We also revised the baselines and included popular off-the-shelf methods: Depth Anything v2 and Marigold. The challenge received a total of 24 submissions that outperformed the baselines on the test set; 10 of these included a report describing their approach, with most leading methods relying on affine-invariant predictions. The challenge winners improved the 3D F-Score over the previous edition's best result, raising it from 22.58% to 23.05%.
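The revised protocol aligns each prediction to the ground truth via least squares with two degrees of freedom (a scale and a shift), which accommodates disparity and affine-invariant outputs, and ranks methods by the 3D F-Score. The sketch below illustrates both ideas in a minimal form; the function names and the toy F-Score interface are illustrative assumptions, not the challenge's actual evaluation toolkit.

```python
import numpy as np

def align_lstsq_2dof(pred, gt, mask):
    """Closed-form least-squares alignment with two degrees of freedom:
    find scale s and shift t minimizing ||s * pred + t - gt||^2 over
    valid pixels. `pred` may be depth, disparity, or an affine-invariant
    prediction, since the affine fit absorbs the unknown scale/shift."""
    p = pred[mask].astype(np.float64)
    g = gt[mask].astype(np.float64)
    A = np.stack([p, np.ones_like(p)], axis=1)      # design matrix [pred, 1]
    (s, t), *_ = np.linalg.lstsq(A, g, rcond=None)  # solve for (s, t)
    return s * pred + t

def f_score(dist_pred_to_gt, dist_gt_to_pred, tau):
    """Toy F-score at distance threshold tau: harmonic mean of precision
    (fraction of predicted points within tau of the ground truth) and
    recall (fraction of ground-truth points within tau of the prediction).
    Inputs are precomputed nearest-neighbor distances between point clouds."""
    precision = np.mean(dist_pred_to_gt < tau)
    recall = np.mean(dist_gt_to_pred < tau)
    if precision + recall == 0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)
```

For example, a prediction related to the ground truth by `gt = 2 * pred + 3` is recovered exactly by the alignment step, so affine-invariant methods are not penalized for their unknown scale and shift.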
Related papers
- AIM 2024 Sparse Neural Rendering Challenge: Methods and Results [64.19942455360068]
This paper reviews the challenge on Sparse Neural Rendering that was part of the Advances in Image Manipulation (AIM) workshop, held in conjunction with ECCV 2024.
The challenge aims at producing novel camera view synthesis of diverse scenes from sparse image observations.
Participants are asked to optimise objective fidelity to the ground-truth images as measured via the Peak Signal-to-Noise Ratio (PSNR) metric.
arXiv Detail & Related papers (2024-09-23T14:17:40Z)
- The Third Monocular Depth Estimation Challenge [134.16634233789776]
This paper discusses the results of the third edition of the Monocular Depth Estimation Challenge (MDEC)
The challenge focuses on zero-shot generalization to the challenging SYNS-Patches dataset, featuring complex scenes in natural and indoor settings.
The challenge winners drastically improved 3D F-Score performance, from 17.51% to 23.72%.
arXiv Detail & Related papers (2024-04-25T17:59:59Z)
- Semi-Supervised Unconstrained Head Pose Estimation in the Wild [60.08319512840091]
We propose the first semi-supervised unconstrained head pose estimation method SemiUHPE.
Our method is based on the observation that the aspect-ratio invariant cropping of wild heads is superior to previous landmark-based affine alignment.
Our proposed method is also beneficial for solving other closely related problems, including generic object rotation regression and 3D head reconstruction.
arXiv Detail & Related papers (2024-04-03T08:01:00Z)
- The Second Monocular Depth Estimation Challenge [93.1678025923996]
The second edition of the Monocular Depth Estimation Challenge (MDEC) was open to methods using any form of supervision.
The challenge was based around the SYNS-Patches dataset, which features a wide diversity of environments with high-quality dense ground-truth.
The top supervised submission improved relative F-Score by 27.62%, while the top self-supervised improved it by 16.61%.
arXiv Detail & Related papers (2023-04-14T11:10:07Z)
- The Monocular Depth Estimation Challenge [74.0535474077928]
This paper summarizes the results of the first Monocular Depth Estimation Challenge (MDEC) organized at WACV2023.
The challenge evaluated the progress of self-supervised monocular depth estimation on the challenging SYNS-Patches dataset.
arXiv Detail & Related papers (2022-11-22T11:04:15Z)
- 5th Place Solution for VSPW 2021 Challenge [29.246666942808673]
In this article, we introduce the solution we used in the VSPW 2021 Challenge.
Our experiments are based on two baseline models, Swin Transformer and MaskFormer.
Without using any external segmentation dataset, our solution ranked the 5th place in the private leaderboard.
arXiv Detail & Related papers (2021-12-13T02:27:05Z)