SPIDR: SDF-based Neural Point Fields for Illumination and Deformation
- URL: http://arxiv.org/abs/2210.08398v3
- Date: Fri, 7 Apr 2023 05:42:33 GMT
- Title: SPIDR: SDF-based Neural Point Fields for Illumination and Deformation
- Authors: Ruofan Liang, Jiahao Zhang, Haoda Li, Chen Yang, Yushi Guan, Nandita Vijaykumar
- Abstract summary: We introduce SPIDR, a new hybrid neural SDF representation.
We propose a novel neural implicit model to learn environment light.
We demonstrate the effectiveness of SPIDR in enabling high quality geometry editing with more accurate updates to the illumination of the scene.
- Score: 4.246563675883777
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Neural radiance fields (NeRFs) have recently emerged as a promising approach
for 3D reconstruction and novel view synthesis. However, NeRF-based methods
encode shape, reflectance, and illumination implicitly and this makes it
challenging for users to manipulate these properties in the rendered images
explicitly. Existing approaches only enable limited editing of the scene and
deformation of the geometry. Furthermore, no existing work enables accurate
scene illumination after object deformation. In this work, we introduce SPIDR,
a new hybrid neural SDF representation. SPIDR combines point cloud and neural
implicit representations to enable the reconstruction of higher quality object
surfaces for geometry deformation and lighting estimation. To more accurately capture
environment illumination for scene relighting, we propose a novel neural
implicit model to learn environment light. To enable more accurate illumination
updates after deformation, we use the shadow mapping technique to approximate
the light visibility updates caused by geometry editing. We demonstrate the
effectiveness of SPIDR in enabling high quality geometry editing with more
accurate updates to the illumination of the scene.
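To make the shadow-mapping claim concrete, here is a minimal sketch of shadow-map-based light visibility for a point cloud, assuming a single directional light and an orthographic light-space projection; the function name and all numeric choices are illustrative and not taken from the SPIDR codebase.

```python
import numpy as np

def light_visibility(points, light_dir, res=256, eps=1e-3):
    """Approximate per-point light visibility with a shadow map.

    points:    (N, 3) surface points (e.g. a neural point cloud).
    light_dir: (3,) unit direction pointing *toward* the light.
    Returns a (N,) boolean mask: True where a point sees the light.
    """
    # Build an orthonormal frame whose z-axis points at the light.
    z = light_dir / np.linalg.norm(light_dir)
    up = np.array([0.0, 1.0, 0.0]) if abs(z[1]) < 0.9 else np.array([1.0, 0.0, 0.0])
    x = np.cross(up, z); x /= np.linalg.norm(x)
    y = np.cross(z, x)
    pts_light = points @ np.stack([x, y, z], axis=1)   # world -> light space

    # Rasterize a depth map, keeping the depth closest to the light per texel.
    uv = pts_light[:, :2]
    lo, hi = uv.min(axis=0), uv.max(axis=0)
    ij = np.clip(((uv - lo) / (hi - lo + 1e-9) * (res - 1)).astype(int), 0, res - 1)
    depth = pts_light[:, 2]                            # larger z = closer to light
    shadow_map = np.full((res, res), -np.inf)
    np.maximum.at(shadow_map, (ij[:, 0], ij[:, 1]), depth)

    # A point is lit if it is (nearly) the closest point in its texel;
    # eps plays the role of the usual shadow-acne bias.
    return depth >= shadow_map[ij[:, 0], ij[:, 1]] - eps
```

Because the map is re-rasterized from the current point positions, deforming the geometry automatically changes which points fall into shadow, which is the role shadow mapping plays in the abstract above.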
Related papers
- AniSDF: Fused-Granularity Neural Surfaces with Anisotropic Encoding for High-Fidelity 3D Reconstruction [55.69271635843385]
We present AniSDF, a novel approach that learns fused-granularity neural surfaces with physics-based encoding for high-fidelity 3D reconstruction.
Our method substantially improves the quality of SDF-based methods in both geometry reconstruction and novel-view synthesis.
arXiv Detail & Related papers (2024-10-02T03:10:38Z)
- PBIR-NIE: Glossy Object Capture under Non-Distant Lighting [30.325872237020395]
Glossy objects present a significant challenge for 3D reconstruction from multi-view input images under natural lighting.
We introduce PBIR-NIE, an inverse rendering framework designed to holistically capture the geometry, material attributes, and surrounding illumination of such objects.
arXiv Detail & Related papers (2024-08-13T13:26:24Z)
- SpecNeRF: Gaussian Directional Encoding for Specular Reflections [43.110815974867315]
We propose a learnable Gaussian directional encoding to better model the view-dependent effects under near-field lighting conditions.
Our new directional encoding captures the spatially-varying nature of near-field lighting and emulates the behavior of prefiltered environment maps.
It enables the efficient evaluation of preconvolved specular color at any 3D location with varying roughness coefficients.
arXiv Detail & Related papers (2023-12-20T15:20:25Z)
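The prefiltering intuition behind SpecNeRF's directional encoding can be sketched with generic spherical-Gaussian lobes (the paper itself uses learnable 3D Gaussians, so treat this as an assumption-laden stand-in): higher roughness widens each lobe, like sampling a blurrier mip level of a prefiltered environment map.

```python
import numpy as np

def sg_directional_encoding(dirs, mu, lam, roughness):
    """Spherical-Gaussian directional encoding (generic sketch).

    dirs:      (N, 3) unit reflection directions.
    mu:        (K, 3) learnable lobe axes (unit vectors).
    lam:       (K,)   learnable lobe sharpness.
    roughness: (N,)   per-point roughness in [0, 1].
    Returns (N, K) features; rougher points see blurrier lobes.
    """
    cos = dirs @ mu.T                                  # (N, K) lobe alignment
    # Widen each lobe as roughness grows (sharpness shrinks toward 0).
    lam_eff = lam[None, :] / (1.0 + lam[None, :] * roughness[:, None] ** 2)
    return np.exp(lam_eff * (cos - 1.0))               # 1 at lobe center
```

In a SpecNeRF-style pipeline these features would feed a small MLP that predicts the specular color at the queried 3D location.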
- Neural Relighting with Subsurface Scattering by Learning the Radiance Transfer Gradient [73.52585139592398]
We propose a novel framework for learning the radiance transfer field via volume rendering.
We will publicly release our code and a novel light-stage dataset of objects with subsurface scattering effects.
arXiv Detail & Related papers (2023-06-15T17:56:04Z)
- NeAI: A Pre-convoluted Representation for Plug-and-Play Neural Ambient Illumination [28.433403714053103]
We propose a framework named neural ambient illumination (NeAI).
NeAI uses Neural Radiance Fields (NeRF) as a lighting model to handle complex lighting in a physically based way.
Experiments demonstrate superior novel-view rendering performance compared to previous works.
arXiv Detail & Related papers (2023-04-18T06:32:30Z)
- Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
arXiv Detail & Related papers (2023-04-06T17:51:54Z)
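The primary/secondary split described above is easy to picture: the neural field supplies the primary-ray hit point, and cast shadows come from intersecting secondary rays with the extracted mesh. A brute-force Moller-Trumbore occlusion test gives the idea (illustrative only; a real renderer would use a BVH, and the function name is invented):

```python
import numpy as np

def shadow_ray_hits(origin, direction, tris, eps=1e-6):
    """Does a secondary (shadow) ray hit any mesh triangle?

    origin:    (3,) surface point from the neural field (offset along the
               normal by a small bias to avoid self-intersection).
    direction: (3,) unit direction toward the light.
    tris:      (T, 3, 3) triangles of the extracted explicit mesh.
    """
    v0, v1, v2 = tris[:, 0], tris[:, 1], tris[:, 2]
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = (e1 * p).sum(-1)
    ok = np.abs(det) > eps                       # skip near-parallel triangles
    inv = np.where(ok, 1.0 / np.where(ok, det, 1.0), 0.0)
    s = origin - v0
    u = (s * p).sum(-1) * inv
    q = np.cross(s, e1)
    v = (direction * q).sum(-1) * inv
    t = (e2 * q).sum(-1) * inv
    hit = ok & (u >= 0) & (v >= 0) & (u + v <= 1) & (t > eps)
    return bool(hit.any())
```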
- Physically-Based Editing of Indoor Scene Lighting from a Single Image [106.60252793395104]
We present a method to edit complex indoor lighting from a single image with its predicted depth and light source segmentation masks.
We tackle this problem using two novel components: 1) a holistic scene reconstruction method that estimates scene reflectance and parametric 3D lighting, and 2) a neural rendering framework that re-renders the scene from our predictions.
arXiv Detail & Related papers (2022-05-19T06:44:37Z)
- NeILF: Neural Incident Light Field for Physically-based Material Estimation [31.230609753253713]
We present a differentiable rendering framework for material and lighting estimation from multi-view images and a reconstructed geometry.
In the framework, we represent scene lighting as the Neural Incident Light Field (NeILF) and material properties as the surface BRDF modelled by multi-layer perceptrons.
arXiv Detail & Related papers (2022-03-14T15:23:04Z)
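A hedged sketch of the two ingredients NeILF names, an MLP incident light field and an MLP material, wired into a Monte Carlo shading loop. This version is Lambertian-only and every class and parameter name is invented for illustration; the paper uses a full microfacet BRDF.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IncidentLightFieldSketch(nn.Module):
    """Toy NeILF-style model: light MLP + BRDF MLP + MC shading."""

    def __init__(self, hidden=128):
        super().__init__()
        # Incident radiance L(x, omega_i): position + direction -> RGB.
        self.light = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Softplus())
        # Material at a surface point: here just a diffuse albedo.
        self.brdf = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid())

    def forward(self, x, normal, n_samples=64):
        # Uniform-hemisphere Monte Carlo estimate of the rendering equation.
        omega = F.normalize(torch.randn(x.shape[0], n_samples, 3), dim=-1)
        cos = (omega * normal[:, None, :]).sum(-1, keepdim=True)
        omega = torch.where(cos < 0, -omega, omega)   # flip into hemisphere
        cos = cos.abs()
        L_i = self.light(torch.cat([x[:, None].expand_as(omega), omega], -1))
        albedo = self.brdf(x)                         # (N, 3)
        # E[f * L_i * cos] / pdf, with f = albedo/pi and pdf = 1/(2*pi).
        return 2 * torch.pi * (albedo[:, None] / torch.pi * L_i * cos).mean(1)
```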
- A Shading-Guided Generative Implicit Model for Shape-Accurate 3D-Aware Image Synthesis [163.96778522283967]
We propose a shading-guided generative implicit model that learns a significantly improved shape representation.
An accurate 3D shape should also yield a realistic rendering under different lighting conditions.
Our experiments on multiple datasets show that the proposed approach achieves photorealistic 3D-aware image synthesis.
arXiv Detail & Related papers (2021-10-29T10:53:12Z)
- NeRFactor: Neural Factorization of Shape and Reflectance Under an Unknown Illumination [60.89737319987051]
We address the problem of recovering shape and spatially-varying reflectance of an object from posed multi-view images of the object illuminated by one unknown lighting condition.
This enables the rendering of novel views of the object under arbitrary environment lighting and editing of the object's material properties.
arXiv Detail & Related papers (2021-06-03T16:18:01Z)
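Because NeRFactor makes visibility and lighting explicit factors, relighting reduces to recombining them. A Lambertian-only recombination sketch (the paper additionally learns a data-driven BRDF prior; all names here are illustrative):

```python
import numpy as np

def factored_shading(albedo, normals, light_dirs, light_rgb, visibility):
    """Recombine NeRFactor-style factors into diffuse shading.

    albedo:     (N, 3) per-point diffuse albedo
    normals:    (N, 3) unit surface normals
    light_dirs: (L, 3) unit directions of environment-light samples
    light_rgb:  (L, 3) radiance of each light sample (editable for relighting)
    visibility: (N, L) learned light visibility in [0, 1]
    """
    cos = np.clip(normals @ light_dirs.T, 0.0, None)        # (N, L)
    incoming = (visibility * cos)[..., None] * light_rgb    # (N, L, 3)
    d_omega = 4.0 * np.pi / light_dirs.shape[0]             # uniform sphere samples
    return albedo / np.pi * incoming.sum(axis=1) * d_omega  # (N, 3)
```

Swapping light_rgb for a new environment map relights the object without retraining, which mirrors the editing capability the summary above describes.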
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.