HRBF-Fusion: Accurate 3D reconstruction from RGB-D data using on-the-fly
implicits
- URL: http://arxiv.org/abs/2202.01829v1
- Date: Thu, 3 Feb 2022 20:20:32 GMT
- Title: HRBF-Fusion: Accurate 3D reconstruction from RGB-D data using on-the-fly
implicits
- Authors: Yabin Xu and Liangliang Nan and Laishui Zhou and Jun Wang and Charlie
C.L. Wang
- Abstract summary: Reconstruction of high-fidelity 3D objects or scenes is a fundamental research problem.
Recent advances in RGB-D fusion have demonstrated the potential of producing 3D models from consumer-level RGB-D cameras.
Existing approaches suffer from the accumulation of errors in camera tracking and distortion in the reconstruction.
- Score: 11.83399015126983
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reconstruction of high-fidelity 3D objects or scenes is a fundamental
research problem. Recent advances in RGB-D fusion have demonstrated the
potential of producing 3D models from consumer-level RGB-D cameras. However,
due to the discrete nature and limited resolution of their surface
representations (e.g., point- or voxel-based), existing approaches suffer from
the accumulation of errors in camera tracking and distortion in the
reconstruction, which leads to an unsatisfactory 3D reconstruction. In this
paper, we present a method using on-the-fly implicits of Hermite Radial Basis
Functions (HRBFs) as a continuous surface representation for camera tracking in
an existing RGB-D fusion framework. Furthermore, curvature estimation and
confidence evaluation are coherently derived from the inherent surface
properties of the on-the-fly HRBF implicits, which contribute to
higher-quality data fusion. We argue that, compared with discrete
representations, our continuous yet on-the-fly surface representation
effectively mitigates the impact of noise through its robustness and
constrains the reconstruction with inherent surface smoothness. Experimental results on various
real-world and synthetic datasets demonstrate that our HRBF-Fusion outperforms
the state-of-the-art approaches in terms of tracking robustness and
reconstruction accuracy.
Related papers
- Normal-guided Detail-Preserving Neural Implicit Functions for High-Fidelity 3D Surface Reconstruction [6.4279213810512665]
Current methods for learning neural implicit representations from RGB or RGBD images produce 3D surfaces with missing parts and details.
This paper demonstrates that training neural representations with first-order differential properties, i.e. surface normals, leads to highly accurate 3D surface reconstruction.
arXiv Detail & Related papers (2024-06-07T11:48:47Z)
- Gaussian Opacity Fields: Efficient and Compact Surface Reconstruction in Unbounded Scenes [50.92217884840301]
Gaussian Opacity Fields (GOF) is a novel approach for efficient, high-quality, and compact surface reconstruction in unbounded scenes.
GOF is derived from ray-tracing-based volume rendering of 3D Gaussians.
GOF surpasses existing 3DGS-based methods in surface reconstruction and novel view synthesis.
arXiv Detail & Related papers (2024-04-16T17:57:19Z)
- Zero123-6D: Zero-shot Novel View Synthesis for RGB Category-level 6D Pose Estimation [66.3814684757376]
This work shows the utility of Diffusion Model-based novel-view-synthesizers in enhancing RGB 6D pose estimation at category-level.
Experiments are quantitatively analyzed on the CO3D dataset, showcasing increased performance over baselines.
arXiv Detail & Related papers (2024-03-21T10:38:18Z)
- ANIM: Accurate Neural Implicit Model for Human Reconstruction from a single RGB-D image [40.03212588672639]
ANIM is a novel method that reconstructs arbitrary 3D human shapes from single-view RGB-D images with an unprecedented level of accuracy.
Our model learns geometric details from both pixel-aligned and voxel-aligned features to leverage depth information.
Experiments demonstrate that ANIM outperforms state-of-the-art works that use RGB, surface normals, point cloud or RGB-D data as input.
arXiv Detail & Related papers (2024-03-15T14:45:38Z)
- AONeuS: A Neural Rendering Framework for Acoustic-Optical Sensor Fusion [25.32113731681485]
Underwater perception and 3D surface reconstruction are challenging problems with broad applications in construction, security, marine archaeology, and environmental monitoring.
Our work develops a physics-based multimodal acoustic-optical neural surface reconstruction framework.
By fusing these complementary modalities, our framework can reconstruct accurate high-resolution 3D surfaces from measurements captured over heavily-restricted baselines.
arXiv Detail & Related papers (2024-02-05T18:59:31Z)
- D-SCo: Dual-Stream Conditional Diffusion for Monocular Hand-Held Object Reconstruction [74.49121940466675]
We introduce centroid-fixed dual-stream conditional diffusion for monocular hand-held object reconstruction.
First, to keep the object centroid from deviating, we utilize a novel hand-constrained centroid fixing paradigm.
Second, we introduce a dual-stream denoiser to semantically and geometrically model hand-object interactions.
arXiv Detail & Related papers (2023-11-23T20:14:50Z)
- Cheating Depth: Enhancing 3D Surface Anomaly Detection via Depth Simulation [12.843938169660404]
RGB-based surface anomaly detection methods have advanced significantly.
Certain surface anomalies remain practically invisible in RGB alone, necessitating the incorporation of 3D information.
Re-training RGB backbones on industrial depth datasets is hindered by the limited availability of sufficiently large datasets.
We propose a new surface anomaly detection method, 3DSR, which outperforms all existing state-of-the-art methods on the challenging MVTec3D anomaly detection benchmark.
arXiv Detail & Related papers (2023-11-02T09:44:21Z)
- Indoor Scene Reconstruction with Fine-Grained Details Using Hybrid Representation and Normal Prior Enhancement [53.10080345190996]
The reconstruction of indoor scenes from multi-view RGB images is challenging due to the coexistence of flat and texture-less regions.
Recent methods leverage neural radiance fields aided by predicted surface normal priors to recover the scene geometry.
This work aims to reconstruct high-fidelity surfaces with fine-grained details by addressing the above limitations.
arXiv Detail & Related papers (2023-09-14T12:05:29Z)
- Looking Through the Glass: Neural Surface Reconstruction Against High Specular Reflections [72.45512144682554]
We present a novel surface reconstruction framework, NeuS-HSR, based on implicit neural rendering.
In NeuS-HSR, the object surface is parameterized as an implicit signed distance function.
We show that NeuS-HSR outperforms state-of-the-art approaches for accurate and robust target surface reconstruction against HSR.
arXiv Detail & Related papers (2023-04-18T02:34:58Z)
- A Combined Approach Toward Consistent Reconstructions of Indoor Spaces Based on 6D RGB-D Odometry and KinectFusion [7.503338065129185]
We propose a 6D RGB-D odometry approach that finds the relative camera pose between consecutive RGB-D frames by keypoint extraction.
We feed the estimated pose to the highly accurate KinectFusion algorithm, which fine-tunes the frame-to-frame relative pose.
Our algorithm outputs a ready-to-use polygon mesh (highly suitable for creating 3D virtual worlds) without any postprocessing steps.
arXiv Detail & Related papers (2022-12-25T22:52:25Z)
- Total Scale: Face-to-Body Detail Reconstruction from Sparse RGBD Sensors [52.38220261632204]
Flat facial surfaces frequently occur in the PIFu-based reconstruction results.
We propose a two-scale PIFu representation to enhance the quality of the reconstructed facial details.
Experiments demonstrate the effectiveness of our approach in vivid facial details and deforming body shapes.
arXiv Detail & Related papers (2021-12-03T18:46:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.