HRBF-Fusion: Accurate 3D reconstruction from RGB-D data using on-the-fly implicits
- URL: http://arxiv.org/abs/2202.01829v1
- Date: Thu, 3 Feb 2022 20:20:32 GMT
- Title: HRBF-Fusion: Accurate 3D reconstruction from RGB-D data using on-the-fly implicits
- Authors: Yabin Xu and Liangliang Nan and Laishui Zhou and Jun Wang and Charlie C.L. Wang
- Abstract summary: Reconstruction of high-fidelity 3D objects or scenes is a fundamental research problem.
Recent advances in RGB-D fusion have demonstrated the potential of producing 3D models from consumer-level RGB-D cameras.
Existing approaches suffer from the accumulation of errors in camera tracking and distortion in the reconstruction.
- Score: 11.83399015126983
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reconstruction of high-fidelity 3D objects or scenes is a fundamental
research problem. Recent advances in RGB-D fusion have demonstrated the
potential of producing 3D models from consumer-level RGB-D cameras. However,
due to the discrete nature and limited resolution of their surface
representations (e.g., point- or voxel-based), existing approaches suffer from
the accumulation of errors in camera tracking and distortion in the
reconstruction, which leads to an unsatisfactory 3D reconstruction. In this
paper, we present a method using on-the-fly implicits of Hermite Radial Basis
Functions (HRBFs) as a continuous surface representation for camera tracking in
an existing RGB-D fusion framework. Furthermore, curvature estimation and
confidence evaluation are coherently derived from the inherent surface
properties of the on-the-fly HRBF implicits, which contribute to a
higher-quality data fusion. We argue that, compared with discrete
representations, our continuous but on-the-fly surface representation
effectively mitigates the impact of noise through its robustness and
constrains the reconstruction with inherent surface smoothness.
Experimental results on various real-world and synthetic datasets
demonstrate that our HRBF-Fusion outperforms
the state-of-the-art approaches in terms of tracking robustness and
reconstruction accuracy.
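For readers unfamiliar with the representation: a Hermite RBF implicit interpolates oriented points, i.e., given centers x_j with unit normals n_j it takes the form f(x) = sum_j [ alpha_j * phi(||x - x_j||) - <beta_j, grad phi(||x - x_j||)> ], with coefficients chosen so that f(x_j) = 0 and grad f(x_j) = n_j. Below is a minimal NumPy sketch of this standard formulation (triharmonic kernel phi(r) = r^3, no low-degree polynomial term, no regularization or acceleration); it illustrates the representation only and is not the authors' implementation.

```python
# Minimal HRBF sketch: fit coefficients so the implicit vanishes at the
# sample points and its gradient matches the normals, then evaluate it.
# Triharmonic kernel phi(r) = r^3; polynomial term and acceleration
# structures are omitted for brevity. Illustrative only.
import numpy as np

def fit_hrbf(points, normals):
    """points: (n, 3) surface samples; normals: (n, 3) unit normals.
    Returns (n, 4) rows (alpha_j, beta_j)."""
    n = len(points)
    A = np.zeros((4 * n, 4 * n))
    b = np.zeros(4 * n)
    for i in range(n):
        for j in range(n):
            d = points[i] - points[j]
            r = np.linalg.norm(d)
            g = 3.0 * r * d                          # grad phi at x_i
            H = (3.0 * (r * np.eye(3) + np.outer(d, d) / r)
                 if r > 1e-12 else np.zeros((3, 3)))  # Hessian of phi
            A[4 * i, 4 * j] = r ** 3                 # value-constraint row
            A[4 * i, 4 * j + 1:4 * j + 4] = -g
            A[4 * i + 1:4 * i + 4, 4 * j] = g        # gradient-constraint rows
            A[4 * i + 1:4 * i + 4, 4 * j + 1:4 * j + 4] = -H
        b[4 * i + 1:4 * i + 4] = normals[i]          # f(x_i) = 0, grad f(x_i) = n_i
    return np.linalg.solve(A, b).reshape(n, 4)

def eval_hrbf(x, points, coeffs):
    """f(x) = sum_j alpha_j phi(|x - x_j|) - <beta_j, grad phi(|x - x_j|)>."""
    f = 0.0
    for (alpha, *beta), xj in zip(coeffs, points):
        d = x - xj
        r = np.linalg.norm(d)
        f += alpha * r ** 3 - np.dot(beta, 3.0 * r * d)
    return f
```

Production systems typically add a low-degree polynomial term and regularization for conditioning; per the abstract, HRBF-Fusion maintains such implicits on the fly and tracks the camera against the continuous zero level set rather than a discrete point or voxel set.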
Related papers
- G2SDF: Surface Reconstruction from Explicit Gaussians with Implicit SDFs [84.07233691641193]
We introduce G2SDF, a novel approach that integrates a neural implicit Signed Distance Field into the Gaussian Splatting framework.
G2SDF achieves higher quality than prior works while maintaining the efficiency of 3DGS.
arXiv Detail & Related papers (2024-11-25T20:07:07Z)
- GSurf: 3D Reconstruction via Signed Distance Fields with Direct Gaussian Supervision [0.0]
Surface reconstruction from multi-view images is a core challenge in 3D vision.
Recent studies have explored signed distance fields (SDF) within Neural Radiance Fields (NeRF) to achieve high-fidelity surface reconstructions.
We introduce GSurf, a novel end-to-end method for learning a signed distance field directly from Gaussian primitives.
GSurf achieves faster training and rendering speeds while delivering 3D reconstruction quality comparable to neural implicit surface methods, such as VolSDF and NeuS.
arXiv Detail & Related papers (2024-11-24T05:55:19Z)
- PF3plat: Pose-Free Feed-Forward 3D Gaussian Splatting [54.7468067660037]
PF3plat sets a new state-of-the-art across all benchmarks, supported by comprehensive ablation studies validating our design choices.
Our framework capitalizes on fast speed, scalability, and high-quality 3D reconstruction and view synthesis capabilities of 3DGS.
arXiv Detail & Related papers (2024-10-29T15:28:15Z)
- ANIM: Accurate Neural Implicit Model for Human Reconstruction from a single RGB-D image [40.03212588672639]
ANIM is a novel method that reconstructs arbitrary 3D human shapes from single-view RGB-D images with an unprecedented level of accuracy.
Our model learns geometric details from both pixel-aligned and voxel-aligned features to leverage depth information.
Experiments demonstrate that ANIM outperforms state-of-the-art works that use RGB, surface normals, point cloud or RGB-D data as input.
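One plausible reading of "pixel-aligned features" here is the PIFu-style sampling common in this line of work: project a 3D query point into the image and bilinearly sample a 2D feature map at that location. ANIM's exact architecture may differ; the function below is a hypothetical sketch.

```python
# Hypothetical PIFu-style pixel-aligned feature sampling; voxel-aligned
# features would analogously sample a 3D feature grid at 3D coordinates.
import torch
import torch.nn.functional as F

def pixel_aligned_features(feat_map, xy_ndc):
    """feat_map: (B, C, H, W) image features; xy_ndc: (B, N, 2) query points
    projected into normalized [-1, 1] image coordinates. Returns (B, N, C)."""
    grid = xy_ndc.unsqueeze(2)                                   # (B, N, 1, 2)
    sampled = F.grid_sample(feat_map, grid, align_corners=True)  # (B, C, N, 1)
    return sampled.squeeze(-1).transpose(1, 2)                   # (B, N, C)
```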
arXiv Detail & Related papers (2024-03-15T14:45:38Z)
- AONeuS: A Neural Rendering Framework for Acoustic-Optical Sensor Fusion [25.32113731681485]
Underwater perception and 3D surface reconstruction are challenging problems with broad applications in construction, security, marine archaeology, and environmental monitoring.
Our work develops a physics-based multimodal acoustic-optical neural surface reconstruction framework.
By fusing these complementary modalities, our framework can reconstruct accurate high-resolution 3D surfaces from measurements captured over heavily restricted baselines.
arXiv Detail & Related papers (2024-02-05T18:59:31Z)
- Ternary-Type Opacity and Hybrid Odometry for RGB NeRF-SLAM [58.736472371951955]
We introduce a ternary-type opacity (TT) model, which categorizes points on a ray intersecting a surface into three regions: before, on, and behind the surface.
This enables a more accurate rendering of depth, subsequently improving the performance of image warping techniques.
Our integrated approach of TT and hybrid odometry (HO) achieves state-of-the-art performance on synthetic and real-world datasets.
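As a toy illustration of the three-region idea, assuming a known signed distance per ray sample and a surface band of half-width eps (the paper's actual per-region opacity values and training scheme may differ):

```python
# Toy ternary-type opacity: classify each ray sample as before, on, or
# behind the surface. The assignment below is an assumed scheme.
import numpy as np

def ternary_opacity(sdf_vals, eps=0.01):
    """sdf_vals: signed distances of ray samples (positive in front)."""
    op = np.zeros_like(sdf_vals)         # region 1, before: transparent
    op[np.abs(sdf_vals) <= eps] = 1.0    # region 2, on the surface: opaque
    op[sdf_vals < -eps] = 1.0            # region 3, behind: occluded/opaque
    return op
```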
arXiv Detail & Related papers (2023-12-20T18:03:17Z)
- D-SCo: Dual-Stream Conditional Diffusion for Monocular Hand-Held Object Reconstruction [74.49121940466675]
We introduce centroid-fixed dual-stream conditional diffusion for monocular hand-held object reconstruction.
First, to prevent the object centroid from deviating, we utilize a novel hand-constrained centroid-fixing paradigm.
Second, we introduce a dual-stream denoiser to semantically and geometrically model hand-object interactions.
arXiv Detail & Related papers (2023-11-23T20:14:50Z)
- Cheating Depth: Enhancing 3D Surface Anomaly Detection via Depth Simulation [12.843938169660404]
RGB-based surface anomaly detection methods have advanced significantly.
Certain surface anomalies remain practically invisible in RGB alone, necessitating the incorporation of 3D information.
Re-training RGB backbones on industrial depth datasets is hindered by the limited availability of sufficiently large datasets.
We propose a new surface anomaly detection method, 3DSR, which outperforms all existing state-of-the-art methods on the challenging MVTec3D anomaly detection benchmark.
arXiv Detail & Related papers (2023-11-02T09:44:21Z)
- Indoor Scene Reconstruction with Fine-Grained Details Using Hybrid Representation and Normal Prior Enhancement [50.56517624931987]
The reconstruction of indoor scenes from multi-view RGB images is challenging due to the coexistence of flat and texture-less regions.
Recent methods leverage neural radiance fields aided by predicted surface normal priors to recover the scene geometry.
This work aims to reconstruct high-fidelity surfaces with fine-grained details by addressing the limitations of these approaches.
arXiv Detail & Related papers (2023-09-14T12:05:29Z)
- Looking Through the Glass: Neural Surface Reconstruction Against High Specular Reflections [72.45512144682554]
We present a novel surface reconstruction framework, NeuS-HSR, based on implicit neural rendering.
In NeuS-HSR, the object surface is parameterized as an implicit signed distance function.
We show that NeuS-HSR outperforms state-of-the-art approaches for accurate and robust target surface reconstruction against HSR.
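For concreteness, in the NeuS-style volume rendering this line of work builds on, signed distances f along a ray are converted to per-sample opacities via the logistic CDF Phi_s(x) = sigmoid(s * x). The sketch below shows that standard conversion (NeuS, Eq. 13), not NeuS-HSR's full pipeline:

```python
# SDF-to-opacity conversion in the NeuS formulation (illustrative sketch).
import numpy as np

def neus_alpha(sdf, s=64.0):
    """sdf: (M,) signed distances at consecutive samples along one ray.
    Returns (M-1,) opacities: max((Phi(f_i) - Phi(f_{i+1})) / Phi(f_i), 0)."""
    phi = 1.0 / (1.0 + np.exp(-s * sdf))   # logistic CDF Phi_s
    return np.maximum((phi[:-1] - phi[1:]) / np.clip(phi[:-1], 1e-6, None), 0.0)
```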
arXiv Detail & Related papers (2023-04-18T02:34:58Z)
- A Combined Approach Toward Consistent Reconstructions of Indoor Spaces Based on 6D RGB-D Odometry and KinectFusion [7.503338065129185]
We propose a 6D RGB-D odometry approach that finds the relative camera pose between consecutive RGB-D frames via keypoint extraction and matching.
We feed the estimated pose to the highly accurate KinectFusion algorithm, which fine-tunes the frame-to-frame relative pose.
Our algorithm outputs a ready-to-use polygon mesh (highly suitable for creating 3D virtual worlds) without any postprocessing steps.
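As a sketch of the frame-to-frame pose step: once keypoints are matched and back-projected to 3D using the depth map, the relative pose is the rigid transform that best aligns the two point sets, classically obtained with the Kabsch/Umeyama algorithm. This is an illustrative stand-in; the paper's exact 6D odometry may differ.

```python
# Rigid relative pose from matched 3D correspondences (Kabsch algorithm).
# Keypoint detection, matching, and depth back-projection are omitted.
import numpy as np

def relative_pose(P, Q):
    """P, Q: (N, 3) matched 3D points from two consecutive frames.
    Returns (R, t) minimizing sum ||R @ p + t - q||^2."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp
```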
arXiv Detail & Related papers (2022-12-25T22:52:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.