Learning Object-Centric Neural Scattering Functions for Free-Viewpoint
Relighting and Scene Composition
- URL: http://arxiv.org/abs/2303.06138v4
- Date: Wed, 4 Oct 2023 03:17:03 GMT
- Title: Learning Object-Centric Neural Scattering Functions for Free-Viewpoint
Relighting and Scene Composition
- Authors: Hong-Xing Yu, Michelle Guo, Alireza Fathi, Yen-Yu Chang, Eric Ryan
Chan, Ruohan Gao, Thomas Funkhouser, Jiajun Wu
- Abstract summary: We propose Object-Centric Neural Scattering Functions (OSFs) for learning to reconstruct object appearance from only images.
OSFs not only support free-viewpoint object relighting but can also model both opaque and translucent objects.
Experiments on real and synthetic data show that OSFs accurately reconstruct appearances for both opaque and translucent objects.
- Score: 28.533032162292297
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Photorealistic object appearance modeling from 2D images is a long-standing topic
in vision and graphics. While neural implicit methods (such as Neural Radiance
Fields) have shown high-fidelity view synthesis results, they cannot relight
the captured objects. More recent neural inverse rendering approaches have
enabled object relighting, but they represent surface properties as simple
BRDFs, and therefore cannot handle translucent objects. We propose
Object-Centric Neural Scattering Functions (OSFs) for learning to reconstruct
object appearance from only images. OSFs not only support free-viewpoint object
relighting but can also model both opaque and translucent objects. While
accurately modeling subsurface light transport for translucent objects can be
highly complex and even intractable for neural methods, OSFs learn to
approximate the radiance transfer from a distant light to an outgoing direction
at any spatial location. This approximation avoids explicitly modeling complex
subsurface scattering, making learning a neural implicit model tractable.
Experiments on real and synthetic data show that OSFs accurately reconstruct
appearances for both opaque and translucent objects, allowing faithful
free-viewpoint relighting as well as scene composition.
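To make the approximation concrete, here is a minimal PyTorch sketch of this kind of model. It is not the authors' implementation: the `OSF` class, layer sizes, positional-encoding setup, and sampling bounds are illustrative assumptions. The network maps a 3D location, a distant-light direction, and an outgoing direction to a volume density and an RGB radiance-transfer value, which are alpha-composited along a camera ray as in NeRF:

```python
import torch
import torch.nn as nn

def positional_encoding(x, n_freqs=6):
    """Map coordinates to sin/cos features, as in NeRF."""
    feats = [x]
    for i in range(n_freqs):
        feats += [torch.sin(2.0**i * x), torch.cos(2.0**i * x)]
    return torch.cat(feats, dim=-1)

class OSF(nn.Module):
    """Hypothetical scattering function:
    (point x, light dir w_i, view dir w_o) -> (density, RGB transfer)."""
    def __init__(self, n_freqs=6, hidden=128):
        super().__init__()
        self.n_freqs = n_freqs
        in_dim = (3 + 3 + 3) * (1 + 2 * n_freqs)  # encoded x, w_i, w_o
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # 1 density + 3 transfer channels
        )

    def forward(self, x, w_i, w_o):
        h = torch.cat([positional_encoding(t, self.n_freqs)
                       for t in (x, w_i, w_o)], dim=-1)
        out = self.mlp(h)
        sigma = torch.relu(out[..., :1])        # volume density
        transfer = torch.sigmoid(out[..., 1:])  # light-to-view transfer
        return sigma, transfer

def render_ray(model, origin, direction, w_i, light_rgb, n_samples=64):
    """Alpha-composite OSF samples along one camera ray, NeRF-style."""
    t = torch.linspace(0.1, 2.0, n_samples)[:, None]
    pts = origin + t * direction                 # (n_samples, 3)
    w_o = -direction.expand(n_samples, 3)        # outgoing = toward camera
    sigma, transfer = model(pts, w_i.expand(n_samples, 3), w_o)
    delta = 1.9 / n_samples                      # uniform sample spacing
    alpha = 1 - torch.exp(-sigma.squeeze(-1) * delta)
    trans = torch.cumprod(torch.cat([torch.ones(1), 1 - alpha[:-1]]), dim=0)
    weights = trans * alpha
    # Scattered radiance = learned transfer modulated by the light's color.
    return (weights[:, None] * transfer * light_rgb).sum(dim=0)

model = OSF()
rgb = render_ray(model, torch.zeros(3), torch.tensor([0.0, 0.0, 1.0]),
                 w_i=torch.tensor([0.0, -1.0, 0.0]), light_rgb=torch.ones(3))
```

Under this formulation, relighting is just a change of `w_i` (and of `light_rgb`); no explicit subsurface-scattering simulation is ever run, since the network absorbs that light transport during training.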
Related papers
- Relighting Scenes with Object Insertions in Neural Radiance Fields [24.18050535794117]
We propose a novel NeRF-based pipeline for inserting object NeRFs into scene NeRFs.
The proposed method achieves realistic relighting effects in extensive experimental evaluations.
arXiv Detail & Related papers (2024-06-21T00:58:58Z)
- NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections [57.63028964831785]
Recent works have improved NeRF's ability to render detailed specular appearance of distant environment illumination, but are unable to synthesize consistent reflections of closer content.
We address these issues with an approach based on ray tracing.
Instead of querying an expensive neural network for the outgoing view-dependent radiance at points along each camera ray, our model casts rays from these points and traces them through the NeRF representation to render feature vectors.
arXiv Detail & Related papers (2024-05-23T17:59:57Z)
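A rough sketch of the casting idea above, under stated assumptions: rather than querying a network for outgoing radiance at a point, a reflection ray is computed and marched through a field that returns per-sample features, which are alpha-composited into one feature vector. The `toy_field` stand-in (and every shape here) is hypothetical, not the paper's model:

```python
import torch

def reflect(view_dir, normal):
    """Mirror the incoming view direction about the surface normal."""
    return view_dir - 2.0 * (view_dir * normal).sum(-1, keepdim=True) * normal

def trace_feature(field, origin, direction, n_samples=32, near=0.05, far=1.0):
    """March a secondary (reflection) ray through a field and
    alpha-composite per-sample feature vectors."""
    t = torch.linspace(near, far, n_samples)[:, None]
    pts = origin + t * direction
    sigma, feat = field(pts)                       # (n, 1), (n, d)
    delta = (far - near) / n_samples
    alpha = 1 - torch.exp(-sigma.squeeze(-1) * delta)
    trans = torch.cumprod(torch.cat([torch.ones(1), 1 - alpha[:-1]]), dim=0)
    return ((trans * alpha)[:, None] * feat).sum(0)  # composited feature

def toy_field(pts):
    """Stand-in for a trained feature field (pure assumption)."""
    return torch.sigmoid(pts[:, :1]), torch.tanh(pts)

view = torch.tensor([0.0, 0.0, -1.0])
normal = torch.tensor([0.0, 0.0, 1.0])
refl = reflect(view, normal)                       # reflected direction
feature = trace_feature(toy_field, torch.zeros(3), refl)
```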
- Inverse Rendering of Glossy Objects via the Neural Plenoptic Function and Radiance Fields [45.64333510966844]
Inverse rendering aims at recovering both geometry and materials of objects.
We propose a novel 5D Neural Plenoptic Function (NeP) based on NeRFs and ray tracing.
Our method can reconstruct high-fidelity geometry/materials of challenging glossy objects with complex lighting interactions from nearby objects.
arXiv Detail & Related papers (2024-03-24T16:34:47Z)
- Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
arXiv Detail & Related papers (2023-04-06T17:51:54Z)
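The division of labor described above can be sketched with a generic shadow test: primary shading comes from the neural field, while secondary (shadow) rays are intersected against the explicit mesh. Below is a standard Moller-Trumbore ray/triangle test, not the paper's code; the `shadowed` helper and the toy triangle are assumptions:

```python
import torch

def ray_hits_triangle(orig, direction, v0, v1, v2, eps=1e-8):
    """Moller-Trumbore intersection test against a single triangle."""
    e1, e2 = v1 - v0, v2 - v0
    p = torch.cross(direction, e2, dim=-1)
    det = (e1 * p).sum()
    if det.abs() < eps:
        return False                        # ray parallel to the triangle
    inv = 1.0 / det
    s = orig - v0
    u = inv * (s * p).sum()
    q = torch.cross(s, e1, dim=-1)
    v = inv * (direction * q).sum()
    t = inv * (e2 * q).sum()
    return bool(0 <= u and 0 <= v and u + v <= 1 and t > eps)

def shadowed(point, light_dir, triangles):
    """Cast a secondary ray from a surface point toward a distant light;
    any mesh hit means the point receives a cast shadow."""
    return any(ray_hits_triangle(point, light_dir, *tri) for tri in triangles)

tri = (torch.tensor([-1.0, -1.0, 1.0]), torch.tensor([1.0, -1.0, 1.0]),
       torch.tensor([0.0, 1.0, 1.0]))
print(shadowed(torch.zeros(3), torch.tensor([0.0, 0.0, 1.0]), [tri]))  # True
```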
- NEMTO: Neural Environment Matting for Novel View and Relighting Synthesis of Transparent Objects [28.62468618676557]
We propose NEMTO, the first end-to-end neural rendering pipeline to model 3D transparent objects.
With 2D images of the transparent object as input, our method is capable of high-quality novel view and relighting synthesis.
arXiv Detail & Related papers (2023-03-21T15:50:08Z)
- NeRS: Neural Reflectance Surfaces for Sparse-view 3D Reconstruction in the Wild [80.09093712055682]
We introduce a surface analog of implicit models called Neural Reflectance Surfaces (NeRS).
NeRS learns a neural shape representation of a closed surface that is diffeomorphic to a sphere, guaranteeing watertight reconstructions.
We demonstrate that surface-based neural reconstructions enable learning from such data, outperforming volumetric neural rendering-based reconstructions.
arXiv Detail & Related papers (2021-10-14T17:59:58Z)
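A minimal sketch of the sphere-parameterized shape idea above (the `SphereSurface` class and its sizes are hypothetical, not the NeRS implementation): an MLP displaces points sampled on the unit sphere, so the predicted surface is a continuous deformation of a closed surface and cannot develop holes:

```python
import torch
import torch.nn as nn

class SphereSurface(nn.Module):
    """Hypothetical NeRS-style shape network: maps unit-sphere points
    to 3D surface points by predicting a displacement."""
    def __init__(self, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, sphere_pts):
        # Deforming the closed sphere keeps the output surface closed.
        return sphere_pts + self.mlp(sphere_pts)

# Sample the unit sphere and push the samples through the shape network.
u = torch.randn(256, 3)
sphere_pts = u / u.norm(dim=-1, keepdim=True)
surface_pts = SphereSurface()(sphere_pts)
```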
- NeRFactor: Neural Factorization of Shape and Reflectance Under an Unknown Illumination [60.89737319987051]
We address the problem of recovering shape and spatially-varying reflectance of an object from posed multi-view images of the object illuminated by one unknown lighting condition.
This enables the rendering of novel views of the object under arbitrary environment lighting and editing of the object's material properties.
arXiv Detail & Related papers (2021-06-03T16:18:01Z)
- Object-Centric Neural Scene Rendering [19.687759175741824]
We present a method for composing photorealistic scenes from captured images of objects.
Our work builds upon neural radiance fields (NeRFs), which implicitly model the volumetric density and directionally-emitted radiance of a scene.
We learn object-centric neural scattering functions (OSFs), a representation that models per-object light transport implicitly using a lighting- and view-dependent neural network.
arXiv Detail & Related papers (2020-12-15T18:55:02Z)
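One simple way to realize such per-object composition, sketched here under assumptions (the toy blob fields stand in for trained per-object OSFs, and a real system would also transform each object into scene coordinates): densities from the per-object fields add at shared sample points, and each object's radiance contributes in proportion to its share of the density:

```python
import torch

def compose_fields(fields, pts):
    """Combine per-object fields at shared sample points: densities add,
    and each field's radiance is weighted by its density share."""
    sigmas, rgbs = zip(*[f(pts) for f in fields])       # each (n,1), (n,3)
    sigma_sum = torch.stack(sigmas).sum(0)              # (n, 1)
    weights = torch.stack(sigmas) / (sigma_sum + 1e-8)  # per-object share
    rgb = (weights * torch.stack(rgbs)).sum(0)          # (n, 3)
    return sigma_sum, rgb

# Toy fields standing in for trained object models (assumptions).
def red_blob(pts):
    sigma = torch.exp(-(pts ** 2).sum(-1, keepdim=True))
    return sigma, torch.tensor([1.0, 0.0, 0.0]).expand_as(pts)

def blue_blob(pts):
    sigma = torch.exp(-((pts - 0.5) ** 2).sum(-1, keepdim=True))
    return sigma, torch.tensor([0.0, 0.0, 1.0]).expand_as(pts)

pts = torch.rand(64, 3)
sigma, rgb = compose_fields([red_blob, blue_blob], pts)
```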
- DeFMO: Deblurring and Shape Recovery of Fast Moving Objects [139.67524021201103]
The proposed generative model embeds an image of the blurred object into a latent space representation, disentangles the background, and renders the sharp appearance.
DeFMO outperforms the state of the art and generates high-quality temporal super-resolution frames.
arXiv Detail & Related papers (2020-12-01T16:02:04Z)
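A skeletal sketch of the encoder/renderer split described above, with hypothetical architectures (not the DeFMO networks): an encoder maps the blurred input to a latent code, and a renderer decodes that code at a sub-frame time into a sharp RGB appearance plus an alpha mask that separates the object from the background:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Embed a blurred image into a latent code (sizes are assumptions)."""
    def __init__(self, latent=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, latent),
        )

    def forward(self, img):
        return self.net(img)

class Renderer(nn.Module):
    """Decode the latent code at sub-frame time t into sharp RGB + alpha."""
    def __init__(self, latent=128, size=32):
        super().__init__()
        self.size = size
        self.net = nn.Linear(latent + 1, 4 * size * size)

    def forward(self, z, t):
        out = self.net(torch.cat([z, t], dim=-1))
        out = out.view(-1, 4, self.size, self.size)
        return out[:, :3].sigmoid(), out[:, 3:].sigmoid()  # rgb, alpha

blurred = torch.rand(1, 3, 64, 64)
z = Encoder()(blurred)
renderer = Renderer()
# Render the sharp object at several sub-frame timestamps.
frames = [renderer(z, torch.tensor([[t]])) for t in (0.0, 0.5, 1.0)]
```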
- Category Level Object Pose Estimation via Neural Analysis-by-Synthesis [64.14028598360741]
In this paper, we combine a gradient-based fitting procedure with a parametric neural image synthesis module.
The image synthesis network is designed to efficiently span the pose configuration space.
We experimentally show that the method can recover orientation of objects with high accuracy from 2D images alone.
arXiv Detail & Related papers (2020-08-18T20:30:47Z)
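A toy sketch of the gradient-based fitting loop described above (the real method compares against a neural image synthesis module; the orthographic point "renderer" here is purely an assumption for illustration): a pose parameter is optimized by rendering, comparing with the observation, and backpropagating:

```python
import torch

def rotate_z(points, theta):
    """Differentiable rotation of 3D points about the z-axis."""
    c, s = torch.cos(theta), torch.sin(theta)
    zero, one = torch.zeros_like(c), torch.ones_like(c)
    rot = torch.stack([torch.stack([c, -s, zero]),
                       torch.stack([s, c, zero]),
                       torch.stack([zero, zero, one])])
    return points @ rot.T

def render(points, theta):
    """Toy differentiable 'renderer': rotate, then project to 2D."""
    return rotate_z(points, theta)[:, :2]

points = torch.randn(100, 3)
target = render(points, torch.tensor(0.7))     # observation at unknown pose

theta = torch.tensor(0.0, requires_grad=True)  # initial pose guess
opt = torch.optim.Adam([theta], lr=0.05)
for _ in range(200):                           # render-and-compare loop
    opt.zero_grad()
    loss = ((render(points, theta) - target) ** 2).mean()
    loss.backward()
    opt.step()
# theta drifts toward 0.7 as the synthesis matches the observation.
```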
This list is automatically generated from the titles and abstracts of the papers in this site.