Looking Through the Glass: Neural Surface Reconstruction Against High
Specular Reflections
- URL: http://arxiv.org/abs/2304.08706v1
- Date: Tue, 18 Apr 2023 02:34:58 GMT
- Title: Looking Through the Glass: Neural Surface Reconstruction Against High
Specular Reflections
- Authors: Jiaxiong Qiu, Peng-Tao Jiang, Yifan Zhu, Ze-Xin Yin, Ming-Ming Cheng,
Bo Ren
- Abstract summary: We present a novel surface reconstruction framework, NeuS-HSR, based on implicit neural rendering.
In NeuS-HSR, the object surface is parameterized as an implicit signed distance function.
We show that NeuS-HSR outperforms state-of-the-art approaches for accurate and robust target surface reconstruction against HSR.
- Score: 72.45512144682554
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Neural implicit methods have achieved high-quality 3D object surfaces under
slight specular highlights. However, high specular reflections (HSR) often
appear in front of target objects when we capture them through glass. The
complex ambiguity in these scenes violates multi-view consistency, making it
challenging for recent methods to reconstruct target objects
correctly. To remedy this issue, we present a novel surface reconstruction
framework, NeuS-HSR, based on implicit neural rendering. In NeuS-HSR, the
object surface is parameterized as an implicit signed distance function (SDF).
To reduce the interference of HSR, we propose decomposing the rendered image
into two appearances: the target object and the auxiliary plane. We design a
novel auxiliary plane module by combining physical assumptions and neural
networks to generate the auxiliary plane appearance. Extensive experiments on
synthetic and real-world datasets demonstrate that NeuS-HSR outperforms
state-of-the-art approaches for accurate and robust target surface
reconstruction against HSR. Code is available at
https://github.com/JiaxiongQ/NeuS-HSR.
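The decomposition described in the abstract, rendering a pixel as the sum of a target-object appearance and an auxiliary-plane appearance, can be sketched as a weighted blend of two branches. This is a minimal illustration only; the function and variable names are hypothetical and not taken from the released code.

```python
import numpy as np

def render_decomposed(target_rgb, target_w, plane_rgb, plane_w):
    """Compose per-sample colors from two branches along each camera ray.

    target_rgb, plane_rgb: (n_rays, n_samples, 3) colors predicted by the
    target-object branch and the auxiliary-plane branch.
    target_w, plane_w: (n_rays, n_samples) volume-rendering weights, e.g.
    alpha-compositing weights derived from the SDF and the plane model.
    Returns the composed (n_rays, 3) image used for the photometric loss.
    """
    target_app = (target_w[..., None] * target_rgb).sum(axis=1)
    plane_app = (plane_w[..., None] * plane_rgb).sum(axis=1)
    # Training supervises the sum of both appearances against the captured
    # image; at test time only the target branch is kept, which is what
    # suppresses the high specular reflections in the reconstruction.
    return target_app + plane_app
```

Splitting the reflection into a separate branch lets the photometric loss be satisfied without forcing the SDF to explain the reflected content as geometry.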
Related papers
- High-Fidelity Mask-free Neural Surface Reconstruction for Virtual Reality [6.987660269386849]
Hi-NeuS is a novel rendering-based framework for neural implicit surface reconstruction.
Our approach has been validated through NeuS and its variant Neuralangelo.
arXiv Detail & Related papers (2024-09-20T02:07:49Z)
- NeuRodin: A Two-stage Framework for High-Fidelity Neural Surface Reconstruction [63.85586195085141]
Signed Distance Function (SDF)-based volume rendering has demonstrated significant capabilities in surface reconstruction.
We introduce NeuRodin, a novel two-stage neural surface reconstruction framework.
NeuRodin achieves high-fidelity surface reconstruction and retains the flexible optimization characteristics of density-based methods.
arXiv Detail & Related papers (2024-08-19T17:36:35Z)
- UniSDF: Unifying Neural Representations for High-Fidelity 3D Reconstruction of Complex Scenes with Reflections [92.38975002642455]
We propose UniSDF, a general purpose 3D reconstruction method that can reconstruct large complex scenes with reflections.
Our method is able to robustly reconstruct complex large-scale scenes with fine details and reflective surfaces.
arXiv Detail & Related papers (2023-12-20T18:59:42Z)
- ObjectSDF++: Improved Object-Compositional Neural Implicit Surfaces [40.489487738598825]
In recent years, neural implicit surface reconstruction has emerged as a popular paradigm for multi-view 3D reconstruction.
Previous work ObjectSDF introduced a framework for object-compositional neural implicit surfaces.
We propose a new framework called ObjectSDF++ to overcome the limitations of ObjectSDF.
arXiv Detail & Related papers (2023-08-15T16:35:40Z)
- VolRecon: Volume Rendering of Signed Ray Distance Functions for Generalizable Multi-View Reconstruction [64.09702079593372]
VolRecon is a novel generalizable implicit reconstruction method with a Signed Ray Distance Function (SRDF).
On DTU dataset, VolRecon outperforms SparseNeuS by about 30% in sparse view reconstruction and achieves comparable accuracy as MVSNet in full view reconstruction.
arXiv Detail & Related papers (2022-12-15T18:59:54Z)
- Recovering Fine Details for Neural Implicit Surface Reconstruction [3.9702081347126943]
We present D-NeuS, a volume rendering neural implicit surface reconstruction method capable of recovering fine geometric details.
We impose multi-view feature consistency on the surface points, derived by interpolating SDF zero-crossings from sampled points along rays.
Our method reconstructs high-accuracy surfaces with details, and outperforms the state of the art.
arXiv Detail & Related papers (2022-11-21T10:06:09Z)
- NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction [88.02850205432763]
We present a novel neural surface reconstruction method, called NeuS, for reconstructing objects and scenes with high fidelity from 2D image inputs.
Existing neural surface reconstruction approaches, such as DVR and IDR, require foreground masks as supervision.
We observe that the conventional volume rendering method causes inherent geometric errors for surface reconstruction.
We propose a new formulation that is free of bias in the first order of approximation, thus leading to more accurate surface reconstruction even without the mask supervision.
arXiv Detail & Related papers (2021-06-20T12:59:42Z)
- UNISURF: Unifying Neural Implicit Surfaces and Radiance Fields for Multi-View Reconstruction [61.17219252031391]
We present a novel method for reconstructing surfaces from multi-view images using neural implicit 3D representations.
Our key insight is that implicit surface models and radiance fields can be formulated in a unified way, enabling both surface and volume rendering.
Our experiments demonstrate that we outperform NeRF in terms of reconstruction quality while performing on par with IDR without requiring masks.
arXiv Detail & Related papers (2021-04-20T15:59:38Z)
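All of the methods listed above share one ingredient: geometry is represented as a signed distance function whose zero level set is the surface. As a minimal sketch, an analytic sphere SDF stands in here for a trained network; in practice the final mesh is typically extracted by running marching cubes on a grid of SDF values.

```python
import numpy as np

def sphere_sdf(points, radius=1.0):
    """Signed distance to a sphere centered at the origin:
    negative inside, zero on the surface, positive outside."""
    return np.linalg.norm(points, axis=-1) - radius

# The surface is the zero level set {x : f(x) = 0}. Methods such as NeuS
# convert f into volume-rendering densities so the SDF can be optimized
# directly from posed 2D images via a photometric loss.
p = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(sphere_sdf(p))  # [-1.  1.  0.]
```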
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.