High-Resolution Depth Maps Based on TOF-Stereo Fusion
- URL: http://arxiv.org/abs/2107.14688v1
- Date: Fri, 30 Jul 2021 15:11:42 GMT
- Title: High-Resolution Depth Maps Based on TOF-Stereo Fusion
- Authors: Vineet Gandhi, Jan Cech and Radu Horaud
- Abstract summary: We propose a novel TOF-stereo fusion method based on an efficient seed-growing algorithm.
We show that the proposed algorithm outperforms 2D image-based stereo algorithms.
The algorithm potentially exhibits real-time performance on a single CPU.
- Score: 27.10059147107254
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The combination of range sensors with color cameras can be very useful for
robot navigation, semantic perception, manipulation, and telepresence. Several
methods of combining range- and color-data have been investigated and
successfully used in various robotic applications. Most of these systems suffer
from the problems of noise in the range-data and resolution mismatch between
the range sensor and the color cameras, since the resolution of current range
sensors is much less than the resolution of color cameras. High-resolution
depth maps can be obtained using stereo matching, but this often fails to
construct accurate depth maps of weakly/repetitively textured scenes, or if the
scene exhibits complex self-occlusions. Range sensors provide coarse depth
information regardless of presence/absence of texture. The use of a calibrated
system, composed of a time-of-flight (TOF) camera and of a stereoscopic camera
pair, allows data fusion, thus overcoming the weaknesses of both individual
sensors. We propose a novel TOF-stereo fusion method based on an efficient
seed-growing algorithm which uses the TOF data projected onto the stereo image
pair as an initial set of correspondences. These initial "seeds" are then
propagated based on a Bayesian model which combines an image similarity score
with rough depth priors computed from the low-resolution range data. The
overall result is a dense and accurate depth map at the resolution of the color
cameras at hand. We show that the proposed algorithm outperforms 2D image-based
stereo algorithms and that the results are of higher resolution than
off-the-shelf color-range sensors, e.g., Kinect. Moreover, the algorithm
potentially exhibits real-time performance on a single CPU.
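The seed-growing propagation described in the abstract can be sketched as follows. This is a simplified illustration, not the authors' implementation: sparse seeds (TOF depths already projected to image disparities, a conversion assumed here) are grown by a best-first search ordered by a photometric ZNCC score minus a quadratic penalty for drifting from the seed disparity, a crude stand-in for the paper's Bayesian combination of image similarity and range-based depth priors.

```python
import heapq
import numpy as np

def zncc(left, right, x, y, d, w=2):
    """Zero-mean normalized cross-correlation over a (2w+1)x(2w+1) window
    between left[y, x] and its candidate match right[y, x - d]."""
    h, width = left.shape
    if not (w <= y < h - w and w <= x < width - w and w <= x - d < width - w):
        return -1.0  # window falls outside one of the images
    a = left[y - w:y + w + 1, x - w:x + w + 1].astype(float)
    b = right[y - w:y + w + 1, x - d - w:x - d + w + 1].astype(float)
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 1e-9 else -1.0

def seed_growing_fusion(left, right, seeds, lam=0.05, sigma=2.0, tau=0.5):
    """Grow a dense disparity map from sparse (x, y, disparity) seeds.

    Best-first propagation: the candidate with the highest combined score
    is finalized first; each finalized pixel spawns its 4-neighbours at
    disparities d-1, d, d+1 (the usual seed-growing search range).
    The lam/sigma penalty is a hypothetical substitute for the paper's
    range-data prior; tau rejects photometrically dissimilar candidates."""
    h, width = left.shape
    disp = np.full((h, width), np.nan)
    heap = []  # max-heap via negated scores
    for x, y, d in seeds:
        heapq.heappush(heap, (-zncc(left, right, x, y, d), x, y, d, d))
    while heap:
        neg_score, x, y, d, d0 = heapq.heappop(heap)
        if -neg_score < tau or not np.isnan(disp[y, x]):
            continue  # too dissimilar, or pixel already decided
        disp[y, x] = d
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < width and 0 <= ny < h and np.isnan(disp[ny, nx]):
                for nd in (d - 1, d, d + 1):
                    score = zncc(left, right, nx, ny, nd)
                    score -= lam * ((nd - d0) / sigma) ** 2  # prior penalty
                    heapq.heappush(heap, (-score, nx, ny, nd, d0))
    return disp
```

On a synthetic pair where the right image is the left shifted by a constant disparity, a single correct seed suffices to grow the full interior of the map, which is what makes a handful of coarse TOF measurements so effective as initialization.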
Related papers
- SelfReDepth: Self-Supervised Real-Time Depth Restoration for Consumer-Grade Sensors [42.48726526726542]
SelfReDepth is a self-supervised deep learning technique for depth restoration.
It uses multiple sequential depth frames and color data to achieve high-quality depth videos with temporal coherence.
Our results demonstrate our approach's real-time performance on real-world datasets.
arXiv Detail & Related papers (2024-06-05T15:38:02Z) - SDGE: Stereo Guided Depth Estimation for 360$^\circ$ Camera Sets [65.64958606221069]
Multi-camera systems are often used in autonomous driving to achieve 360$^\circ$ perception.
These 360$^\circ$ camera sets often have limited or low-quality overlap regions, making multi-view stereo methods infeasible for the entire image.
We propose the Stereo Guided Depth Estimation (SGDE) method, which enhances depth estimation of the full image by explicitly utilizing multi-view stereo results on the overlap.
arXiv Detail & Related papers (2024-02-19T02:41:37Z) - Symmetric Uncertainty-Aware Feature Transmission for Depth Super-Resolution [52.582632746409665]
We propose a novel Symmetric Uncertainty-aware Feature Transmission (SUFT) for color-guided DSR.
Our method achieves superior performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-06-01T06:35:59Z) - FloatingFusion: Depth from ToF and Image-stabilized Stereo Cameras [37.812681878193914]
Smartphones now have multimodal camera systems with time-of-flight (ToF) depth sensors and multiple color cameras.
Producing accurate high-resolution depth remains challenging due to the low resolution and limited active illumination power of ToF sensors.
We propose an automatic calibration technique based on dense 2D/3D matching that can estimate camera parameters from a single snapshot.
arXiv Detail & Related papers (2022-10-06T09:57:09Z) - Multi-Camera Collaborative Depth Prediction via Consistent Structure Estimation [75.99435808648784]
We propose a novel multi-camera collaborative depth prediction method.
It does not require large overlapping areas while maintaining structure consistency between cameras.
Experimental results on DDAD and NuScenes datasets demonstrate the superior performance of our method.
arXiv Detail & Related papers (2022-10-05T03:44:34Z) - High Dynamic Range and Super-Resolution from Raw Image Bursts [52.341483902624006]
This paper introduces the first approach to reconstruct high-resolution, high-dynamic range color images from raw photographic bursts captured by a handheld camera with exposure bracketing.
The proposed algorithm is fast, with low memory requirements compared to state-of-the-art learning-based approaches to image restoration.
Experiments demonstrate its excellent performance with super-resolution factors of up to $\times 4$ on real photographs taken in the wild with handheld cameras.
arXiv Detail & Related papers (2022-07-29T13:31:28Z) - Robust and accurate depth estimation by fusing LiDAR and Stereo [8.85338187686374]
We propose a precise and robust method for fusing LiDAR and stereo cameras.
This method combines the complementary advantages of the two sensors.
We evaluate the proposed pipeline on the KITTI benchmark.
arXiv Detail & Related papers (2022-07-13T11:55:15Z) - SurroundDepth: Entangling Surrounding Views for Self-Supervised Multi-Camera Depth Estimation [101.55622133406446]
We propose a SurroundDepth method to incorporate the information from multiple surrounding views to predict depth maps across cameras.
Specifically, we employ a joint network to process all the surrounding views and propose a cross-view transformer to effectively fuse the information from multiple views.
In experiments, our method achieves the state-of-the-art performance on the challenging multi-camera depth estimation datasets.
arXiv Detail & Related papers (2022-04-07T17:58:47Z) - Object Disparity [0.0]
This paper proposes a different approach to 3D object distance detection: it detects object-level disparity directly, without computing a dense pixel-wise disparity map.
An example SqueezeNet-based Object Disparity-SSD network demonstrates efficient object disparity detection, with accuracy comparable to the KITTI pixel-disparity ground truth.
arXiv Detail & Related papers (2021-08-18T02:11:28Z) - PDC: Piecewise Depth Completion utilizing Superpixels [0.0]
Current approaches often rely on CNN-based methods with several known drawbacks.
We propose our novel Piecewise Depth Completion (PDC), which works completely without deep learning.
In our evaluation, we show both the influence of the individual processing steps and the overall performance of our method on the challenging KITTI dataset.
arXiv Detail & Related papers (2021-07-14T13:58:39Z) - Exploiting Raw Images for Real-Scene Super-Resolution [105.18021110372133]
We study the problem of real-scene single image super-resolution to bridge the gap between synthetic data and real captured images.
We propose a method to generate more realistic training data by mimicking the imaging process of digital cameras.
We also develop a two-branch convolutional neural network to exploit the radiance information originally recorded in raw images.
arXiv Detail & Related papers (2021-02-02T16:10:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.