MIPI 2023 Challenge on RGB+ToF Depth Completion: Methods and Results
- URL: http://arxiv.org/abs/2304.13916v1
- Date: Thu, 27 Apr 2023 02:00:04 GMT
- Title: MIPI 2023 Challenge on RGB+ToF Depth Completion: Methods and Results
- Authors: Qingpeng Zhu, Wenxiu Sun, Yuekun Dai, Chongyi Li, Shangchen Zhou,
Ruicheng Feng, Qianhui Sun, Chen Change Loy, Jinwei Gu, Yi Yu, Yangke Huang,
Kang Zhang, Meiya Chen, Yu Wang, Yongchao Li, Hao Jiang, Amrit Kumar Muduli,
Vikash Kumar, Kunal Swami, Pankaj Kumar Bajpai, Yunchao Ma, Jiajun Xiao, Zhi
Ling
- Abstract summary: Deep learning has enabled more accurate and efficient completion of depth maps from RGB images and sparse ToF measurements.
To evaluate the performance of different depth completion methods, we organized an RGB+sparse ToF depth completion competition.
In this report, we present the results of the competition and analyze the strengths and weaknesses of the top-performing methods.
- Score: 76.77266693620425
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Depth completion from RGB images and sparse Time-of-Flight (ToF) measurements
is an important problem in computer vision and robotics. While traditional
methods for depth completion have relied on stereo vision or structured light
techniques, recent advances in deep learning have enabled more accurate and
efficient completion of depth maps from RGB images and sparse ToF measurements.
To evaluate the performance of different depth completion methods, we organized
an RGB+sparse ToF depth completion competition. The competition aimed to
encourage research in this area by providing a standardized dataset and
evaluation metrics to compare the accuracy of different approaches. In this
report, we present the results of the competition and analyze the strengths and
weaknesses of the top-performing methods. We also discuss the implications of
our findings for future research in RGB+sparse ToF depth completion. We hope
that this competition and report will help to advance the state-of-the-art in
this important area of research. More details of this challenge and the link to
the dataset can be found at https://mipi-challenge.org/MIPI2023.
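As a concrete illustration of the task and its evaluation, the hedged sketch below densifies a sparse ToF depth map with a naive nearest-neighbor baseline and scores it with RMSE against ground truth. The baseline, the array layout, and the choice of RMSE are illustrative assumptions here, not the official challenge protocol.

```python
# Minimal sketch of RGB+sparse-ToF depth completion (illustrative, not the
# challenge's official baseline or metric): densify sparse ToF samples with a
# nearest-neighbor fill and score the result with RMSE over valid pixels.
import numpy as np
from scipy.interpolate import griddata

def complete_depth_nearest(sparse_depth: np.ndarray) -> np.ndarray:
    """Fill a sparse depth map (zeros = missing) by nearest-neighbor interpolation."""
    h, w = sparse_depth.shape
    ys, xs = np.nonzero(sparse_depth)                 # pixels that carry a ToF measurement
    grid_y, grid_x = np.mgrid[0:h, 0:w]
    dense = griddata((ys, xs), sparse_depth[ys, xs], (grid_y, grid_x), method="nearest")
    return dense.astype(np.float32)

def rmse(pred: np.ndarray, gt: np.ndarray, valid: np.ndarray) -> float:
    """Root-mean-square error over pixels with valid ground-truth depth."""
    diff = (pred - gt)[valid]
    return float(np.sqrt(np.mean(diff ** 2)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gt = rng.uniform(0.5, 5.0, size=(120, 160)).astype(np.float32)  # toy ground-truth depth (metres)
    mask = rng.random(gt.shape) < 0.01                              # ~1% of pixels hold a ToF sample
    pred = complete_depth_nearest(np.where(mask, gt, 0.0))
    print("RMSE:", rmse(pred, gt, valid=gt > 0))
```

A learned method would replace the nearest-neighbor fill with a network that fuses the RGB image with the sparse measurements; the toy data above only stands in for a real RGB+ToF capture.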
Related papers
- RGB Guided ToF Imaging System: A Survey of Deep Learning-based Methods [30.34690112905212]
Integrating an RGB camera into a ToF imaging system has become a significant technique for perceiving the real world.
This paper comprehensively reviews the works related to RGB guided ToF imaging, including network structures, learning strategies, evaluation metrics, benchmark datasets, and objective functions.
arXiv Detail & Related papers (2024-05-16T17:59:58Z)
- Depth-Relative Self Attention for Monocular Depth Estimation [23.174459018407003]
Deep neural networks rely on various visual hints, such as size, shade, and texture, extracted from RGB information.
We propose a novel depth estimation model named RElative Depth Transformer (RED-T) that uses relative depth as guidance in self-attention.
We show that the proposed model achieves competitive results on monocular depth estimation benchmarks and is less biased toward RGB information.
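To make the idea of relative depth as guidance in self-attention concrete, here is a minimal sketch that biases standard dot-product attention logits by the absolute depth difference between tokens, so tokens at similar depths attend to each other more strongly. The bias form, the scale alpha, and the tensor layout are assumptions made for illustration, not RED-T's exact formulation.

```python
# Hedged sketch of depth-relative self-attention: penalise attention between
# tokens whose (estimated) depths differ a lot. Not RED-T's exact formulation.
import torch
import torch.nn.functional as F

def depth_relative_attention(q, k, v, depth, alpha: float = 1.0):
    """q, k, v: (B, N, C) token features; depth: (B, N) per-token depth estimates."""
    scale = q.shape[-1] ** -0.5
    logits = torch.einsum("bnc,bmc->bnm", q, k) * scale        # standard scaled dot-product scores
    rel = (depth.unsqueeze(2) - depth.unsqueeze(1)).abs()      # |d_i - d_j|, shape (B, N, N)
    attn = F.softmax(logits - alpha * rel, dim=-1)             # large depth gaps get down-weighted
    return torch.einsum("bnm,bmc->bnc", attn, v)
```

Setting alpha to zero recovers plain self-attention, which makes the depth guidance straightforward to ablate.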
arXiv Detail & Related papers (2023-04-25T14:20:31Z)
- MIPI 2022 Challenge on RGB+ToF Depth Completion: Dataset and Report [92.61915017739895]
This paper introduces the first MIPI challenge, which includes five tracks focusing on novel image sensors and imaging algorithms.
The participants were provided with a new dataset called TetrasRGBD, which contains 18k pairs of high-quality synthetic RGB+Depth training data and 2.3k pairs of testing data from mixed sources.
The final results are evaluated with objective metrics and, subjectively, with a Mean Opinion Score (MOS).
arXiv Detail & Related papers (2022-09-15T05:31:53Z)
- Learning an Efficient Multimodal Depth Completion Model [11.740546882538142]
RGB image-guided sparse depth completion has recently attracted extensive attention but still faces open problems.
The proposed method can outperform some state-of-the-art methods with a lightweight architecture.
The method also won first place in the MIPI 2022 RGB+ToF depth completion challenge.
arXiv Detail & Related papers (2022-08-23T07:03:14Z)
- Pyramidal Attention for Saliency Detection [30.554118525502115]
This paper exploits only RGB images, estimates depth from RGB, and leverages the intermediate depth features.
We employ a pyramidal attention structure to extract multi-level convolutional-transformer features and process initial-stage representations.
We report significantly improved performance against 21 and 40 state-of-the-art SOD methods on eight RGB and RGB-D datasets, respectively.
arXiv Detail & Related papers (2022-04-14T06:57:46Z)
- Efficient Depth Completion Using Learned Bases [94.0808155168311]
We propose a new global geometry constraint for depth completion.
Assuming that depth maps often lie in low-dimensional subspaces, a dense depth map can be approximated by a weighted sum of full-resolution principal depth bases.
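A minimal sketch of the weighted-sum-of-bases idea, under the assumption that the bases can be obtained by PCA over training depth maps; the paper's actual basis construction may differ, and all names below are hypothetical.

```python
# Illustrative only: learn full-resolution principal depth bases by PCA and
# rebuild a depth map as mean + sum_i w_i * B_i.
import numpy as np

def learn_depth_bases(train_depths: np.ndarray, k: int):
    """train_depths: (N, H, W). Returns the mean depth (H*W,) and k bases (k, H*W)."""
    n, h, w = train_depths.shape
    flat = train_depths.reshape(n, h * w)
    mean = flat.mean(axis=0)
    _, _, vt = np.linalg.svd(flat - mean, full_matrices=False)
    return mean, vt[:k]                                   # top-k principal directions

def reconstruct(depth: np.ndarray, mean: np.ndarray, bases: np.ndarray) -> np.ndarray:
    """Approximate a depth map as a weighted sum of the learned bases."""
    h, w = depth.shape
    weights = bases @ (depth.reshape(-1) - mean)          # w_i = <D - mean, B_i>
    return (mean + weights @ bases).reshape(h, w)         # D_hat = mean + sum_i w_i B_i
```

In a completion setting, the per-image weights would presumably be predicted by a network from the RGB image and the sparse depth rather than computed by projecting a known dense map, which is done here only to show the reconstruction.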
arXiv Detail & Related papers (2020-12-02T11:57:37Z)
- Accurate RGB-D Salient Object Detection via Collaborative Learning [101.82654054191443]
RGB-D saliency detection shows impressive ability in some challenging scenarios.
We propose a novel collaborative learning framework where edge, depth and saliency are leveraged in a more efficient way.
arXiv Detail & Related papers (2020-07-23T04:33:36Z)
- Is Depth Really Necessary for Salient Object Detection? [50.10888549190576]
We make the first attempt to realize a unified depth-aware framework that uses only RGB information as input at inference time.
The framework not only surpasses state-of-the-art performance on five public RGB SOD benchmarks, but also outperforms RGB-D-based methods on five benchmarks by a large margin.
arXiv Detail & Related papers (2020-05-30T13:40:03Z)
- Single Image Depth Estimation Trained via Depth from Defocus Cues [105.67073923825842]
Estimating depth from a single RGB image is a fundamental task in computer vision.
In this work, we rely on depth-from-defocus cues instead of different views.
We present results that are on par with supervised methods on the KITTI and Make3D datasets and outperform unsupervised learning approaches.
arXiv Detail & Related papers (2020-01-14T20:22:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.