Matching Free Depth Recovery from Structured Light
- URL: http://arxiv.org/abs/2501.07113v1
- Date: Mon, 13 Jan 2025 08:03:49 GMT
- Title: Matching Free Depth Recovery from Structured Light
- Authors: Zhuohang Yu, Kai Wang, Juyong Zhang
- Abstract summary: We present a novel approach for depth estimation from images captured by structured light systems.
Our approach uses a density voxel grid to represent scene geometry, which is trained via self-supervised differentiable volume rendering.
- Abstract: We present a novel approach for depth estimation from images captured by structured light systems. Unlike many previous methods that rely on an image-matching process, our approach uses a density voxel grid to represent scene geometry, which is trained via self-supervised differentiable volume rendering. Our method leverages color fields derived from projected patterns in structured light systems during the rendering process, enabling the isolated optimization of the geometry field. This contributes to faster convergence and high-quality output. Additionally, we incorporate normalized device coordinates (NDC), a distortion loss, and a novel surface-based color loss to enhance geometric fidelity. Experimental results demonstrate that our method outperforms existing matching-based techniques in geometric performance for few-shot scenarios, achieving approximately a 60% reduction in average estimated depth errors on synthetic scenes and about 30% on real-world captured scenes. Furthermore, our approach delivers fast training, with a speed roughly three times faster than previous matching-free methods that employ implicit representations.
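The core ingredient in the abstract, a density field optimized through differentiable volume rendering, rests on the standard alpha-compositing step along a camera ray. The sketch below is a minimal, hypothetical NumPy illustration of that step only; the function name, sampling scheme, and depth-from-weights output are illustrative and are not the paper's implementation (which also uses NDC, a distortion loss, and a surface-based color loss).

```python
import numpy as np

def render_depth_along_ray(densities, ts):
    """Alpha-composite per-sample densities into an expected depth.

    densities: non-negative density samples along one camera ray
    ts: sample depths along the ray (monotonically increasing)
    """
    # Segment lengths between consecutive samples (last one repeated).
    deltas = np.diff(ts, append=ts[-1] + (ts[-1] - ts[-2]))
    # Opacity contributed by each segment.
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance: probability the ray survives to each sample.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas]))[:-1]
    # Compositing weights, then the expected depth along the ray.
    weights = trans * alphas
    return float(np.sum(weights * ts))
```

Because the weights concentrate where density is high, a single dense voxel on the ray pulls the expected depth to that voxel's distance; gradients of a rendering loss flow back into the per-voxel densities, which is what makes the geometry field trainable.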
Related papers
- Fast Underwater Scene Reconstruction using Multi-View Stereo and Physical Imaging [5.676974245780037]
We propose a novel method that integrates Multi-View Stereo (MVS) with a physics-based underwater image formation model.
By using the imaging model to estimate the medium parameters and combining them with color rendering, we restore the true colors of underwater scenes.
Experimental results show that our method enables high-quality synthesis of novel views in scattering media, clear views restoration by removing the medium, and outperforms existing methods in rendering quality and training efficiency.
arXiv Detail & Related papers (2025-01-21T04:35:27Z)
- Efficient Depth-Guided Urban View Synthesis [52.841803876653465]
We introduce Efficient Depth-Guided Urban View Synthesis (EDUS) for fast feed-forward inference and efficient per-scene fine-tuning.
EDUS exploits noisy predicted geometric priors as guidance to enable generalizable urban view synthesis from sparse input images.
Our results indicate that EDUS achieves state-of-the-art performance in sparse view settings when combined with fast test-time optimization.
arXiv Detail & Related papers (2024-07-17T08:16:25Z)
- Depth Reconstruction with Neural Signed Distance Fields in Structured Light Systems [15.603880588503355]
We introduce a novel depth estimation technique for multi-frame structured light setups using neural implicit representations of 3D space.
Our approach employs a neural signed distance field (SDF), trained through self-supervised differentiable rendering.
arXiv Detail & Related papers (2024-05-20T13:24:35Z)
- DNS SLAM: Dense Neural Semantic-Informed SLAM [92.39687553022605]
DNS SLAM is a novel neural RGB-D semantic SLAM approach featuring a hybrid representation.
Our method integrates multi-view geometry constraints with image-based feature extraction to improve appearance details.
Our method achieves state-of-the-art tracking performance on both synthetic and real-world data.
arXiv Detail & Related papers (2023-11-30T21:34:44Z)
- Efficient Multi-View Inverse Rendering Using a Hybrid Differentiable Rendering Method [19.330797817738542]
We introduce a novel hybrid differentiable rendering method to efficiently reconstruct the 3D geometry and reflectance of a scene.
Our method can produce reconstructions with similar or higher quality than state-of-the-art methods while being more efficient.
arXiv Detail & Related papers (2023-08-19T12:48:10Z)
- High-Resolution Volumetric Reconstruction for Clothed Humans [27.900514732877827]
We present a novel method for reconstructing clothed humans from a sparse set of, e.g., 1 to 6 RGB images.
Our method significantly reduces the mean point-to-surface (P2S) error of state-of-the-art methods by more than 50%, achieving approximately 2 mm accuracy at a 512 volume resolution.
arXiv Detail & Related papers (2023-07-25T06:37:50Z)
- ARF: Artistic Radiance Fields [63.79314417413371]
We present a method for transferring the artistic features of an arbitrary style image to a 3D scene.
Previous methods that perform 3D stylization on point clouds or meshes are sensitive to geometric reconstruction errors.
We propose to stylize the more robust radiance field representation.
arXiv Detail & Related papers (2022-06-13T17:55:31Z)
- Neural 3D Reconstruction in the Wild [86.6264706256377]
We introduce a new method that enables efficient and accurate surface reconstruction from Internet photo collections.
We present a new benchmark and protocol for evaluating reconstruction performance on such in-the-wild scenes.
arXiv Detail & Related papers (2022-05-25T17:59:53Z)
- Extracting Triangular 3D Models, Materials, and Lighting From Images [59.33666140713829]
We present an efficient method for joint optimization of materials and lighting from multi-view image observations.
We leverage meshes with spatially-varying materials and environment lighting that can be deployed in any traditional graphics engine.
arXiv Detail & Related papers (2021-11-24T13:58:20Z)
- Depth image denoising using nuclear norm and learning graph model [107.51199787840066]
Group-based image restoration methods are effective at exploiting the similarity among patches.
For each patch, we find and group the most similar patches within a searching window.
The proposed method is superior to other current state-of-the-art denoising methods by both subjective and objective criteria.
arXiv Detail & Related papers (2020-08-09T15:12:16Z)
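The patch-grouping and nuclear-norm steps described in the denoising paper above can be sketched in a few lines. This is a toy, hypothetical NumPy illustration, not that paper's algorithm: the function names, the L2 similarity measure, and the fixed threshold `tau` are all assumptions, and the learned graph model is omitted entirely.

```python
import numpy as np

def group_similar_patches(image, ref_yx, patch=8, window=20, k=16):
    """Collect the k patches most similar (squared L2 distance) to the
    reference patch within a square search window; stack them as rows."""
    H, W = image.shape
    ry, rx = ref_yx
    ref = image[ry:ry + patch, rx:rx + patch].ravel()
    candidates = []
    for y in range(max(0, ry - window), min(H - patch, ry + window) + 1):
        for x in range(max(0, rx - window), min(W - patch, rx + window) + 1):
            p = image[y:y + patch, x:x + patch].ravel()
            candidates.append((np.sum((p - ref) ** 2), p))
    candidates.sort(key=lambda c: c[0])
    return np.stack([p for _, p in candidates[:k]])

def nuclear_norm_denoise(group, tau):
    """Singular-value soft-thresholding: the proximal operator of
    tau * nuclear norm, which pushes the stacked group toward low rank."""
    U, s, Vt = np.linalg.svd(group, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt
```

The design intuition is that rows of a well-formed group are near-duplicates, so the clean group matrix is approximately low rank; shrinking its singular values suppresses the noise (which spreads across all singular directions) while keeping the shared structure.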
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.