Sur2f: A Hybrid Representation for High-Quality and Efficient Surface
Reconstruction from Multi-view Images
- URL: http://arxiv.org/abs/2401.03704v1
- Date: Mon, 8 Jan 2024 07:22:59 GMT
- Title: Sur2f: A Hybrid Representation for High-Quality and Efficient Surface
Reconstruction from Multi-view Images
- Authors: Zhangjin Huang, Zhihao Liang, Haojie Zhang, Yangkai Lin, Kui Jia
- Abstract summary: Multi-view surface reconstruction is an ill-posed, inverse problem in 3D vision research.
Most existing methods rely either on explicit meshes, using surface rendering for reconstruction, or on implicit field functions, using volume rendering of the fields.
We propose a new hybrid representation, termed Sur2f, aiming to better benefit from both representations in a complementary manner.
- Score: 41.81291587750352
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Multi-view surface reconstruction is an ill-posed, inverse problem in 3D
vision research. It involves modeling the geometry and appearance with
appropriate surface representations. Most of the existing methods rely either
on explicit meshes, using surface rendering of meshes for reconstruction, or on
implicit field functions, using volume rendering of the fields for
reconstruction. The two types of representations in fact have their respective
merits. In this work, we propose a new hybrid representation, termed Sur2f,
aiming to better benefit from both representations in a complementary manner.
Technically, we learn two parallel streams of an implicit signed distance field
and an explicit surrogate surface Sur2f mesh, and unify volume rendering of the
implicit signed distance function (SDF) and surface rendering of the surrogate
mesh with a shared, neural shader; the unified shading promotes their
convergence to the same, underlying surface. We synchronize learning of the
surrogate mesh by driving its deformation with functions induced from the
implicit SDF. In addition, the synchronized surrogate mesh enables
surface-guided volume sampling, which greatly improves the sampling efficiency
per ray in volume rendering. We conduct thorough experiments showing that
Sur2f outperforms existing reconstruction methods and surface
representations, including hybrid ones, in terms of both recovery quality and
recovery efficiency.
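To make the sampling claim concrete, here is a minimal Python sketch of surface-guided volume sampling, assuming each ray's hit depth against the surrogate mesh is already known (e.g., from rasterization). The function name, the fixed band width, and the even spacing are our assumptions, not the paper's API.

```python
import torch

def surface_guided_samples(ray_o, ray_d, surf_depth, n_samples=16, band=0.05):
    """Concentrate a ray's samples in a narrow band around the depth at which
    it hits the surrogate mesh, instead of spreading them over the whole ray.
    ray_o, ray_d: (R, 3) origins and unit directions; surf_depth: (R,) depths."""
    # Evenly spaced depth offsets inside [-band, +band] around the hit depth
    # (hypothetical scheme; the paper may place samples differently).
    offsets = torch.linspace(-band, band, n_samples, device=ray_o.device)
    t = surf_depth[:, None] + offsets[None, :]                    # (R, S)
    pts = ray_o[:, None, :] + t[..., None] * ray_d[:, None, :]    # (R, S, 3)
    return pts, t

# Toy usage: four rays that all hit the surrogate mesh at depth 2.0.
ray_o = torch.zeros(4, 3)
ray_d = torch.nn.functional.normalize(torch.randn(4, 3), dim=-1)
pts, t = surface_guided_samples(ray_o, ray_d, surf_depth=torch.full((4,), 2.0))
```

Because every sample lands near the current surface estimate, far fewer samples per ray are needed than with uniform or coarse-to-fine sampling, which is where the claimed efficiency gain comes from.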
Related papers
- RaNeuS: Ray-adaptive Neural Surface Reconstruction [87.20343320266215]
We leverage a differentiable radiance field, e.g., NeRF, to reconstruct detailed 3D surfaces in addition to producing novel view renderings.
Whereas existing methods formulate and optimize the projection from the SDF to the radiance field with a globally constant Eikonal regularization, we improve on this with a ray-wise weighting factor.
Our proposed RaNeuS is extensively evaluated on both synthetic and real datasets.
arXiv Detail & Related papers (2024-06-14T07:54:25Z)
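A minimal sketch of the ray-wise idea above, assuming per-ray weights are already available: the standard Eikonal penalty (gradient norm close to 1) is scaled per ray instead of by one global coefficient. How RaNeuS actually derives the weights is not covered here, and the names are our assumptions.

```python
import torch

def raywise_eikonal_loss(sdf_grad, ray_weight):
    """Eikonal regularization (|grad f| should be 1), scaled per ray rather
    than by a single global coefficient (hypothetical names).
    sdf_grad: (R, S, 3) SDF gradients at samples; ray_weight: (R,) weights."""
    eik = (sdf_grad.norm(dim=-1) - 1.0) ** 2      # (R, S) per-sample residual
    return (ray_weight[:, None] * eik).mean()
```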
- NeuSD: Surface Completion with Multi-View Text-to-Image Diffusion [56.98287481620215]
We present a novel method for 3D surface reconstruction from multiple images where only a part of the object of interest is captured.
Our approach builds on two recent developments: surface reconstruction using neural radiance fields for the reconstruction of the visible parts of the surface, and guidance of pre-trained 2D diffusion models in the form of Score Distillation Sampling (SDS) to complete the shape in unobserved regions in a plausible manner.
arXiv Detail & Related papers (2023-12-07T19:30:55Z)
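For reference, a generic sketch of the SDS update used for such guidance (not NeuSD's exact code): the diffusion model's noise-prediction error, scaled by a timestep weight, is routed into the rendered image through a surrogate loss.

```python
import torch

def sds_grad(noise_pred, noise, weight):
    """Score Distillation Sampling gradient w(t) * (eps_pred - eps).
    noise_pred: model's predicted noise for the noised rendering;
    noise: the noise actually added; weight: timestep-dependent scalar w(t)."""
    return weight * (noise_pred - noise)

# Standard surrogate-loss trick: detach the gradient and dot it with the
# rendering so autograd pushes it into the renderer's parameters.
def sds_loss(rendered, noise_pred, noise, weight):
    return (sds_grad(noise_pred, noise, weight).detach() * rendered).sum()
```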
- Anti-Aliased Neural Implicit Surfaces with Encoding Level of Detail [54.03399077258403]
We present LoD-NeuS, an efficient neural representation for high-frequency geometry detail recovery and anti-aliased novel view rendering.
Our representation aggregates space features from a multi-convolved featurization within a conical frustum along a ray.
arXiv Detail & Related papers (2023-09-19T05:44:00Z)
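A loose sketch of aggregating features within a conical frustum, under our own assumptions: probe points are drawn inside the cone's cross-section around each sample and their features averaged. The random probe pattern and `feat_fn` are placeholders, not LoD-NeuS's multi-convolved featurization.

```python
import torch

def frustum_features(x, radius, feat_fn, n_probe=4):
    """Average features over probe points around each sample center, with the
    spread set by the cone radius at that depth (illustrative only).
    x: (N, 3) sample centers; radius: (N,) cone radius at each sample."""
    probes = x[:, None, :] + radius[:, None, None] * torch.randn(
        x.shape[0], n_probe, 3, device=x.device)
    feats = feat_fn(probes.reshape(-1, 3))                     # (N * n_probe, F)
    return feats.reshape(x.shape[0], n_probe, -1).mean(dim=1)  # (N, F)
```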
- C2F2NeUS: Cascade Cost Frustum Fusion for High Fidelity and Generalizable Neural Surface Reconstruction [12.621233209149953]
We introduce a novel integration scheme that combines multi-view stereo with neural signed distance function representations.
Our method reconstructs robust surfaces and outperforms existing state-of-the-art methods.
arXiv Detail & Related papers (2023-06-16T17:56:16Z)
- NeuralUDF: Learning Unsigned Distance Fields for Multi-view Reconstruction of Surfaces with Arbitrary Topologies [87.06532943371575]
We present a novel method, called NeuralUDF, for reconstructing surfaces with arbitrary topologies from 2D images via volume rendering.
In this paper, we propose to represent surfaces as the Unsigned Distance Function (UDF) and develop a new volume rendering scheme to learn the neural UDF representation.
arXiv Detail & Related papers (2022-11-25T15:21:45Z)
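A simple stand-in for how an unsigned distance can drive volume rendering: density peaks where the UDF approaches zero. NeuralUDF's actual scheme is more involved (in particular it must handle visibility, since a UDF has no inside/outside sign); this bell-shaped mapping is only illustrative.

```python
import torch

def udf_to_density(udf, beta=0.01):
    """Map unsigned distances to volume densities that peak at the surface
    (illustrative stand-in, not NeuralUDF's rendering scheme)."""
    return torch.exp(-(udf / beta) ** 2) / beta
```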
- Recovering Fine Details for Neural Implicit Surface Reconstruction [3.9702081347126943]
We present D-NeuS, a volume rendering neural implicit surface reconstruction method capable of recovering fine geometric details.
We impose multi-view feature consistency on the surface points, derived by interpolating SDF zero-crossings from sampled points along rays.
Our method reconstructs high-accuracy surfaces with details, and outperforms the state of the art.
arXiv Detail & Related papers (2022-11-21T10:06:09Z)
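The zero-crossing step above is standard linear root finding between two consecutive samples whose SDF values change sign; a sketch with our own variable names:

```python
def sdf_zero_crossing(t0, t1, s0, s1):
    """Depth of the SDF zero-crossing, assuming the SDF is linear between two
    consecutive samples with s0 * s1 < 0. t0, t1: depths; s0, s1: SDF values."""
    return t0 + s0 * (t1 - t0) / (s0 - s1)

# Example: SDF goes from +0.2 at depth 1.0 to -0.2 at depth 1.1,
# so the surface point sits at depth 1.05.
assert abs(sdf_zero_crossing(1.0, 1.1, 0.2, -0.2) - 1.05) < 1e-9
```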
- Learning Signed Distance Field for Multi-view Surface Reconstruction [24.090786783370195]
We introduce a novel neural surface reconstruction framework that leverages the knowledge of stereo matching and feature consistency.
We apply a signed distance field (SDF) and a surface light field to represent the scene geometry and appearance respectively.
Our method is able to improve the robustness of geometry estimation and support reconstruction of complex scene topologies.
arXiv Detail & Related papers (2021-08-23T06:23:50Z)
- UNISURF: Unifying Neural Implicit Surfaces and Radiance Fields for Multi-View Reconstruction [61.17219252031391]
We present a novel method for reconstructing surfaces from multi-view images using neural implicit 3D representations.
Our key insight is that implicit surface models and radiance fields can be formulated in a unified way, enabling both surface and volume rendering.
Our experiments demonstrate that we outperform NeRF in terms of reconstruction quality while performing on par with IDR without requiring masks.
arXiv Detail & Related papers (2021-04-20T15:59:38Z)
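The unification rests on standard alpha compositing, with per-sample occupancies playing the role of alphas; a generic sketch (not the paper's code):

```python
import torch

def composite(alphas, colors):
    """Alpha-composite samples along each ray: w_i = a_i * prod_{j<i}(1 - a_j).
    With hard 0/1 occupancies this reduces to surface rendering; with soft
    values it behaves like volume rendering. alphas: (R, S); colors: (R, S, 3)."""
    trans = torch.cumprod(torch.cat([torch.ones_like(alphas[:, :1]),
                                     1.0 - alphas[:, :-1]], dim=1), dim=1)
    weights = alphas * trans                          # (R, S)
    return (weights[..., None] * colors).sum(dim=1)   # (R, 3) pixel colors
```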
- Coupling Explicit and Implicit Surface Representations for Generative 3D Modeling [41.79675639550555]
We propose a novel neural architecture for representing 3D surfaces, which harnesses two complementary shape representations.
We make these two representations synergistic by introducing novel consistency losses.
Our hybrid architecture outputs results that are superior to those of the two equivalent single-representation networks.
arXiv Detail & Related papers (2020-07-20T17:24:51Z)
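One plausible form of such a consistency loss, sketched under our own assumptions (the paper's exact losses may differ): points sampled on the explicit surface should lie on the implicit field's zero level set, with field gradients aligned to the surface normals.

```python
import torch

def consistency_loss(surface_pts, surface_normals, sdf_fn):
    """Tie an explicit surface to an implicit field: the field should vanish
    at surface points and its gradient should match the surface normal there
    (hypothetical loss, illustrative of the coupling idea only)."""
    surface_pts = surface_pts.requires_grad_(True)
    sdf = sdf_fn(surface_pts)                               # (N,) field values
    grad = torch.autograd.grad(sdf.sum(), surface_pts, create_graph=True)[0]
    on_surface = sdf.abs().mean()                           # zero level set
    normal_align = (1.0 - torch.nn.functional.cosine_similarity(
        grad, surface_normals, dim=-1)).mean()              # gradient alignment
    return on_surface + normal_align
```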