Learning a Room with the Occ-SDF Hybrid: Signed Distance Function Mingled with Occupancy Aids Scene Representation
- URL: http://arxiv.org/abs/2303.09152v1
- Date: Thu, 16 Mar 2023 08:34:02 GMT
- Authors: Xiaoyang Lyu, Peng Dai, Zizhang Li, Dongyu Yan, Yi Lin, Yifan Peng,
Xiaojuan Qi
- Abstract summary: Implicit neural rendering, which uses signed distance function representation with geometric priors, has led to impressive progress in the surface reconstruction of large-scale scenes.
We conduct experiments to identify limitations of the original color rendering loss and priors-embedded SDF scene representation.
We propose a feature-based color rendering loss that utilizes non-zero feature values to bring back optimization signals.
- Score: 46.635542063913185
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Implicit neural rendering, which uses signed distance function (SDF)
representation with geometric priors (such as depth or surface normal), has led
to impressive progress in the surface reconstruction of large-scale scenes.
However, applying this method to reconstruct a room-level scene from images may
miss structures in low-intensity areas or small and thin objects. We conducted
experiments on three datasets to identify limitations of the original color
rendering loss and priors-embedded SDF scene representation.
We found that the color rendering loss results in optimization bias against
low-intensity areas, causing gradient vanishing and leaving these areas
unoptimized. To address this issue, we propose a feature-based color rendering
loss that utilizes non-zero feature values to bring back optimization signals.
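The gradient-vanishing argument can be sketched with a toy volume-rendering example. This is a hypothetical illustration, not the authors' code: the weights, colors, and feature dimensions below are assumed for demonstration only.

```python
# Toy sketch: why an RGB rendering loss starves low-intensity regions of
# gradient, and how rendering non-zero feature values restores the signal.
import numpy as np

def render(weights, values):
    """Volume-render per-sample values along a ray: sum_i w_i * v_i."""
    return weights @ values

def grad_wrt_weights(weights, values, target):
    """Gradient of the L2 rendering loss w.r.t. the ray weights.
    dL/dw_i = 2 * (render - target) . v_i, so it scales with the
    per-sample values v_i."""
    residual = render(weights, values) - target
    return 2.0 * values @ residual

rng = np.random.default_rng(0)
w = rng.random(8)
w /= w.sum()                              # ray weights (from density/SDF)

# Dark region: per-sample colors and the target pixel are near zero, so the
# gradient w.r.t. the weights (and hence the geometry) nearly vanishes.
dark_colors = np.full((8, 3), 1e-3)
g_color = grad_wrt_weights(w, dark_colors, target=np.full(3, 5e-4))

# Feature rendering: learned feature vectors (assumed non-zero here) keep
# the same gradient at a usable magnitude even where the image is dark.
features = rng.random((8, 16)) + 0.5
g_feat = grad_wrt_weights(w, features, target=rng.random(16))

print(np.abs(g_color).max(), np.abs(g_feat).max())
```

Because the gradient is a sum of residual-weighted sample values, near-zero colors suppress it regardless of how wrong the geometry is; non-zero features do not.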
Additionally, the SDF representation can be influenced by objects along a ray
path, disrupting the monotonic change of SDF values when a single object is
present. To counteract this, we explore using the occupancy representation,
which encodes each point separately and is unaffected by objects along a
querying ray. Our experimental results demonstrate that the joint forces of the
feature-based rendering loss and Occ-SDF hybrid representation scheme can
provide high-quality reconstruction results, especially in challenging
room-level scenarios. The code will be released.
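The along-ray interference that motivates the occupancy representation can be illustrated in one dimension. This is a toy sketch under assumed geometry (solid intervals on a line), not the authors' implementation:

```python
# Toy 1-D sketch: the SDF at a point depends on the nearest surface anywhere,
# so a second object along a ray distorts values around the first object,
# while occupancy encodes each point independently.
import numpy as np

def sdf_1d(x, intervals):
    """Signed distance on a line to a union of solid intervals
    (negative inside, positive outside)."""
    d = np.full_like(x, np.inf)
    inside = np.zeros_like(x, dtype=bool)
    for a, b in intervals:
        d = np.minimum(d, np.minimum(np.abs(x - a), np.abs(x - b)))
        inside |= (x >= a) & (x <= b)
    return np.where(inside, -d, d)

def occupancy_1d(x, intervals):
    """Occupancy: 1 inside any object, 0 outside; purely per-point."""
    occ = np.zeros_like(x, dtype=bool)
    for a, b in intervals:
        occ |= (x >= a) & (x <= b)
    return occ.astype(float)

x = np.linspace(0.0, 10.0, 101)
one_obj = [(2.0, 3.0)]
two_objs = [(2.0, 3.0), (4.0, 5.0)]   # add a second object on the same ray

# Adding the second object changes the SDF between the objects, but the
# occupancy of points in and around the first object is untouched.
sdf_changed = not np.allclose(sdf_1d(x, one_obj), sdf_1d(x, two_objs))
occ_stable = np.allclose(occupancy_1d(x, one_obj)[x <= 3.5],
                         occupancy_1d(x, two_objs)[x <= 3.5])
print(sdf_changed, occ_stable)
```

This is why a querying ray can see non-monotonic SDF values around a single surface once other geometry is nearby, whereas occupancy stays a local property of each point.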
Related papers
- RISE-SDF: a Relightable Information-Shared Signed Distance Field for Glossy Object Inverse Rendering [26.988572852463815]
In this paper, we propose a novel end-to-end relightable neural inverse rendering system.
Our experiments demonstrate that our algorithm achieves state-of-the-art performance in inverse rendering and relighting.
arXiv Detail & Related papers (2024-09-30T09:42:10Z)
- Ray-Distance Volume Rendering for Neural Scene Reconstruction [15.125703603989715]
Existing methods in neural scene reconstruction utilize the Signed Distance Function (SDF) to model the density function.
In indoor scenes, the density computed from the SDF for a sampled point may not consistently reflect its real importance in volume rendering.
This work proposes a novel approach for indoor scene reconstruction, which instead parameterizes the density function with the Signed Ray Distance Function (SRDF).
arXiv Detail & Related papers (2024-08-28T04:19:14Z)
- NeuRodin: A Two-stage Framework for High-Fidelity Neural Surface Reconstruction [63.85586195085141]
Signed Distance Function (SDF)-based volume rendering has demonstrated significant capabilities in surface reconstruction.
We introduce NeuRodin, a novel two-stage neural surface reconstruction framework.
NeuRodin achieves high-fidelity surface reconstruction and retains the flexible optimization characteristics of density-based methods.
arXiv Detail & Related papers (2024-08-19T17:36:35Z)
- PBIR-NIE: Glossy Object Capture under Non-Distant Lighting [30.325872237020395]
Glossy objects present a significant challenge for 3D reconstruction from multi-view input images under natural lighting.
We introduce PBIR-NIE, an inverse rendering framework designed to holistically capture the geometry, material attributes, and surrounding illumination of such objects.
arXiv Detail & Related papers (2024-08-13T13:26:24Z)
- CVT-xRF: Contrastive In-Voxel Transformer for 3D Consistent Radiance Fields from Sparse Inputs [65.80187860906115]
We propose a novel approach to improve NeRF's performance with sparse inputs.
We first adopt a voxel-based ray sampling strategy to ensure that the sampled rays intersect with a certain voxel in 3D space.
We then randomly sample additional points within the voxel and apply a Transformer to infer the properties of other points on each ray, which are then incorporated into the volume rendering.
arXiv Detail & Related papers (2024-03-25T15:56:17Z)
- Anti-Aliased Neural Implicit Surfaces with Encoding Level of Detail [54.03399077258403]
We present LoD-NeuS, an efficient neural representation for high-frequency geometry detail recovery and anti-aliased novel view rendering.
Our representation aggregates space features from a multi-convolved featurization within a conical frustum along a ray.
arXiv Detail & Related papers (2023-09-19T05:44:00Z)
- ObjectSDF++: Improved Object-Compositional Neural Implicit Surfaces [40.489487738598825]
In recent years, neural implicit surface reconstruction has emerged as a popular paradigm for multi-view 3D reconstruction.
Previous work ObjectSDF introduced a framework for object-compositional neural implicit surfaces.
We propose a new framework called ObjectSDF++ to overcome the limitations of ObjectSDF.
arXiv Detail & Related papers (2023-08-15T16:35:40Z)
- Inverse Rendering of Translucent Objects using Physical and Neural Renderers [13.706425832518093]
In this work, we propose an inverse model that estimates 3D shape, spatially-varying reflectance, homogeneous scattering parameters, and an environment illumination jointly from only a pair of captured images of a translucent object.
Because two reconstructions are differentiable, we can compute a reconstruction loss to assist parameter estimation.
We constructed a large-scale synthetic dataset of translucent objects, which consists of 117K scenes.
arXiv Detail & Related papers (2023-05-15T04:03:11Z)
- NeuralUDF: Learning Unsigned Distance Fields for Multi-view Reconstruction of Surfaces with Arbitrary Topologies [87.06532943371575]
We present a novel method, called NeuralUDF, for reconstructing surfaces with arbitrary topologies from 2D images via volume rendering.
In this paper, we propose to represent surfaces as the Unsigned Distance Function (UDF) and develop a new volume rendering scheme to learn the neural UDF representation.
arXiv Detail & Related papers (2022-11-25T15:21:45Z)
- NeRFactor: Neural Factorization of Shape and Reflectance Under an Unknown Illumination [60.89737319987051]
We address the problem of recovering shape and spatially-varying reflectance of an object from posed multi-view images of the object illuminated by one unknown lighting condition.
This enables the rendering of novel views of the object under arbitrary environment lighting and editing of the object's material properties.
arXiv Detail & Related papers (2021-06-03T16:18:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.