Towards High-Precision Depth Sensing via Monocular-Aided iToF and RGB Integration
- URL: http://arxiv.org/abs/2508.16579v1
- Date: Sun, 03 Aug 2025 13:48:00 GMT
- Title: Towards High-Precision Depth Sensing via Monocular-Aided iToF and RGB Integration
- Authors: Yansong Du, Yutong Deng, Yuting Zhou, Feiyu Jiao, Jian Song, Xun Guan
- Abstract summary: We present a novel iToF-RGB fusion framework designed to address the inherent limitations of indirect Time-of-Flight (iToF) depth sensing.
The proposed method first reprojects the narrow-FoV iToF depth map onto the wide-FoV RGB coordinate system.
A dual-encoder fusion network is then employed to jointly extract complementary features from the reprojected iToF depth and RGB image.
By integrating cross-modal structural cues and depth consistency constraints, our approach achieves enhanced depth accuracy, improved edge sharpness, and seamless FoV expansion.
- Score: 11.077863605272668
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: This paper presents a novel iToF-RGB fusion framework designed to address the inherent limitations of indirect Time-of-Flight (iToF) depth sensing, such as low spatial resolution, limited field-of-view (FoV), and structural distortion in complex scenes. The proposed method first reprojects the narrow-FoV iToF depth map onto the wide-FoV RGB coordinate system through a precise geometric calibration and alignment module, ensuring pixel-level correspondence between modalities. A dual-encoder fusion network is then employed to jointly extract complementary features from the reprojected iToF depth and RGB image, guided by monocular depth priors to recover fine-grained structural details and perform depth super-resolution. By integrating cross-modal structural cues and depth consistency constraints, our approach achieves enhanced depth accuracy, improved edge sharpness, and seamless FoV expansion. Extensive experiments on both synthetic and real-world datasets demonstrate that the proposed framework significantly outperforms state-of-the-art methods in terms of accuracy, structural consistency, and visual quality.
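As an illustration of the geometric calibration and alignment step described in the abstract, the following is a minimal NumPy sketch of reprojecting a narrow-FoV depth map into a wide-FoV RGB camera frame. All names, parameters, and the nearest-pixel splatting strategy are hypothetical illustrations of the standard pinhole reprojection pipeline, not the paper's actual implementation:

```python
import numpy as np

def reproject_depth(depth_itof, K_itof, K_rgb, R, t, rgb_shape):
    """Reproject an iToF depth map into the RGB camera's image plane.

    Unprojects each valid iToF pixel to a 3D point, transforms it by the
    iToF-to-RGB extrinsics (R, t), and projects it with the RGB intrinsics.
    Returns a sparse depth map in the (typically larger) RGB frame.
    """
    h, w = depth_itof.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_itof.ravel()
    valid = z > 0
    z_safe = np.where(valid, z, 1.0)            # avoid divide-by-zero on empty pixels
    pix = np.stack([us.ravel(), vs.ravel(), np.ones(h * w)])  # homogeneous pixel coords
    rays = np.linalg.inv(K_itof) @ pix          # back-project to unit-depth rays
    pts = rays * z_safe                         # scale rays by measured depth
    pts_rgb = R @ pts + t.reshape(3, 1)         # move into the RGB camera frame
    proj = K_rgb @ pts_rgb                      # project with RGB intrinsics
    u, v = proj[0] / proj[2], proj[1] / proj[2]
    ui, vi = np.round(u).astype(int), np.round(v).astype(int)
    ok = (valid & (proj[2] > 0)
          & (ui >= 0) & (ui < rgb_shape[1])
          & (vi >= 0) & (vi < rgb_shape[0]))
    out = np.zeros(rgb_shape, dtype=depth_itof.dtype)
    out[vi[ok], ui[ok]] = pts_rgb[2, ok]        # nearest-pixel splat (ignores occlusion)
    return out
```

A real pipeline would additionally handle occlusion (z-buffering when several points land on one RGB pixel) and lens distortion; the reprojected sparse map is then what a fusion network would densify and super-resolve.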
Related papers
- UDPNet: Unleashing Depth-based Priors for Robust Image Dehazing [77.10640210751981]
UDPNet is a general framework that leverages depth-based priors from a large-scale pretrained depth estimation model DepthAnything V2.
Our proposed solution establishes a new benchmark for depth-aware dehazing across various scenarios.
arXiv Detail & Related papers (2026-01-11T13:29:02Z) - Depth-Consistent 3D Gaussian Splatting via Physical Defocus Modeling and Multi-View Geometric Supervision [12.972772139292957]
This paper proposes a novel computational framework that integrates depth-of-field supervision and multi-view consistency supervision.
By unifying defocus physics with multi-view geometric constraints, our method achieves superior depth fidelity, demonstrating a 0.8 dB PSNR improvement over the state-of-the-art method.
arXiv Detail & Related papers (2025-11-13T13:51:16Z) - S2ML: Spatio-Spectral Mutual Learning for Depth Completion [56.26679539288063]
Raw depth images captured by RGB-D cameras often suffer from incomplete depth values due to weak reflections, boundary shadows, and artifacts.
Existing methods address this problem through depth completion in the image domain, but they overlook the physical characteristics of raw depth images.
We propose a Spatio-Spectral Mutual Learning framework (S2ML) to harmonize the advantages of both spatial and frequency domains for depth completion.
arXiv Detail & Related papers (2025-11-08T15:01:55Z) - DuCos: Duality Constrained Depth Super-Resolution via Foundation Model [56.88399488384106]
We introduce DuCos, a novel depth super-resolution framework grounded in Lagrangian duality theory.
DuCos is the first to significantly improve generalization across diverse scenarios with foundation models as prompts.
arXiv Detail & Related papers (2025-03-06T07:36:45Z) - RGB-Thermal Infrared Fusion for Robust Depth Estimation in Complex Environments [0.0]
This paper proposes a novel multimodal depth estimation model, RTFusion, which enhances depth estimation accuracy and robustness.
The model incorporates a unique fusion mechanism, EGFusion, consisting of the Mutual Complementary Attention (MCA) module for cross-modal feature alignment.
Experiments on the MS2 and ViViD++ datasets demonstrate that the proposed model consistently produces high-quality depth maps.
arXiv Detail & Related papers (2025-03-05T01:35:14Z) - Symmetric Uncertainty-Aware Feature Transmission for Depth Super-Resolution [52.582632746409665]
We propose a novel Symmetric Uncertainty-aware Feature Transmission (SUFT) for color-guided DSR.
Our method achieves superior performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-06-01T06:35:59Z) - Structure Flow-Guided Network for Real Depth Super-Resolution [28.63334760296165]
We propose a novel structure flow-guided depth super-resolution (DSR) framework.
A cross-modality flow map is learned to guide the RGB-structure information transferring for precise depth upsampling.
Our framework achieves excellent performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-01-31T05:13:55Z) - High-resolution Depth Maps Imaging via Attention-based Hierarchical Multi-modal Fusion [84.24973877109181]
We propose a novel attention-based hierarchical multi-modal fusion network for guided DSR.
We show that our approach outperforms state-of-the-art methods in terms of reconstruction accuracy, running speed and memory efficiency.
arXiv Detail & Related papers (2021-04-04T03:28:33Z) - Light Field Reconstruction via Deep Adaptive Fusion of Hybrid Lenses [67.01164492518481]
This paper explores the problem of reconstructing high-resolution light field (LF) images from hybrid lenses.
We propose a novel end-to-end learning-based approach, which can comprehensively utilize the specific characteristics of the input.
Our framework could potentially decrease the cost of high-resolution LF data acquisition and benefit LF data storage and transmission.
arXiv Detail & Related papers (2021-02-14T06:44:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.