NeMF: Inverse Volume Rendering with Neural Microflake Field
- URL: http://arxiv.org/abs/2304.00782v2
- Date: Tue, 4 Apr 2023 01:13:03 GMT
- Title: NeMF: Inverse Volume Rendering with Neural Microflake Field
- Authors: Youjia Zhang, Teng Xu, Junqing Yu, Yuteng Ye, Junle Wang, Yanqing
Jing, Jingyi Yu, Wei Yang
- Abstract summary: In this paper, we propose to conduct inverse volume rendering, in contrast to surface-based rendering.
We adopt coordinate networks to implicitly encode the microflake volume, and develop a differentiable microflake volume renderer to train the network in an end-to-end way.
- Score: 30.15831015284247
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recovering the physical attributes of an object's appearance from its images
captured under an unknown illumination is challenging yet essential for
photo-realistic rendering. Recent approaches adopt the emerging implicit scene
representations and have shown impressive results. However, they unanimously
adopt a surface-based representation, and hence cannot handle well scenes with
very complex geometry, translucent objects, etc. In this paper, we propose to
conduct inverse volume rendering, in contrast to surface-based, by representing
a scene using a microflake volume, which assumes the space is filled with
infinitely small flakes and light reflects or scatters at each spatial location
according to microflake distributions. We further adopt the coordinate networks
to implicitly encode the microflake volume, and develop a differentiable
microflake volume renderer to train the network in an end-to-end way.
Our NeMF enables effective recovery of appearance attributes for highly
complex geometry and scattering objects, supports high-quality relighting
and material editing, and in particular simulates volume rendering effects,
such as scattering, which are infeasible for surface-based approaches.
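The differentiable volume renderer described above composites radiance along each ray using transmittance-weighted sums, the standard emission-absorption model that such renderers build on. A minimal NumPy sketch of this compositing step follows; the function name and the assumption that per-sample radiance is already given (in NeMF it would come from the microflake distributions and lighting, which are not modeled here) are illustrative, not the paper's actual implementation.

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Emission-absorption volume rendering along a single ray.

    sigmas: (N,) per-sample extinction coefficients
    colors: (N, 3) per-sample outgoing radiance (a stand-in for the
            radiance produced by the microflake phase function)
    deltas: (N,) distances between consecutive samples
    """
    alpha = 1.0 - np.exp(-sigmas * deltas)           # per-sample opacity
    # transmittance: probability light survives up to each sample
    trans = np.concatenate([[1.0], np.cumprod(1.0 - alpha)[:-1]])
    weights = trans * alpha                          # compositing weights
    return (weights[:, None] * colors).sum(axis=0)   # composited color
```

Every operation here is differentiable, which is what allows the coordinate network behind `sigmas` and `colors` to be trained end-to-end from image losses.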
Related papers
- ROSA: Reconstructing Object Shape and Appearance Textures by Adaptive Detail Transfer [3.5884936187733403]
We present an inverse rendering method that directly optimizes mesh geometry with spatially adaptive mesh resolution, based solely on image data.
In particular, we refine the mesh and locally condition the surface smoothness based on the estimated normal texture and mesh curvature.
In addition, we enable the reconstruction of fine appearance details in high-resolution textures through a pioneering tile-based method.
arXiv Detail & Related papers (2025-01-30T18:59:54Z) - Learning Topology Uniformed Face Mesh by Volume Rendering for Multi-view Reconstruction [40.45683488053611]
Face meshes in consistent topology serve as the foundation for many face-related applications.
We propose a mesh volume rendering method that enables directly optimizing mesh geometry while preserving topology.
The key innovation lies in spreading sparse mesh features into the surrounding space to simulate the radiance field required for volume rendering.
arXiv Detail & Related papers (2024-04-08T15:25:50Z) - Binary Opacity Grids: Capturing Fine Geometric Detail for Mesh-Based
View Synthesis [70.40950409274312]
We modify density fields to encourage them to converge towards surfaces, without compromising their ability to reconstruct thin structures.
We also develop a fusion-based meshing strategy followed by mesh simplification and appearance model fitting.
The compact meshes produced by our model can be rendered in real-time on mobile devices.
arXiv Detail & Related papers (2024-02-19T18:59:41Z) - NePF: Neural Photon Field for Single-Stage Inverse Rendering [6.977356702921476]
We present a novel single-stage framework, Neural Photon Field (NePF), to address the ill-posed problem of inverse rendering from multi-view images.
NePF achieves this unification by fully utilizing the physical implication behind the weight function of neural implicit surfaces.
We evaluate our method on both real and synthetic datasets.
arXiv Detail & Related papers (2023-11-20T06:15:46Z) - Adaptive Shells for Efficient Neural Radiance Field Rendering [92.18962730460842]
We propose a neural radiance formulation that smoothly transitions between volumetric and surface-based rendering.
Our approach enables efficient rendering at very high fidelity.
We also demonstrate that the extracted envelope enables downstream applications such as animation and simulation.
arXiv Detail & Related papers (2023-11-16T18:58:55Z) - TraM-NeRF: Tracing Mirror and Near-Perfect Specular Reflections through
Neural Radiance Fields [3.061835990893184]
Implicit representations like Neural Radiance Fields (NeRF) have shown impressive results for rendering complex scenes with fine details.
We present a novel reflection tracing method tailored for the involved volume rendering within NeRF.
We derive efficient strategies for importance sampling and transmittance computation along rays from only a few samples.
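Transmittance along a ray is the exponential of the negative optical depth, and with only a few samples it is typically estimated by quadrature over the extinction field. The sketch below shows one common few-sample scheme, stratified sampling of the optical depth integral; it is a generic stand-in, not the specific estimator derived in the paper, and the function and parameter names are assumptions.

```python
import numpy as np

def transmittance(sigma, t_near, t_far, n_samples=4, rng=None):
    """Estimate T = exp(-integral of sigma dt) over [t_near, t_far]
    with a few stratified (jittered) samples of the extinction field.

    sigma: callable mapping an array of ray depths to extinction values
    """
    rng = np.random.default_rng() if rng is None else rng
    edges = np.linspace(t_near, t_far, n_samples + 1)
    # one jittered sample per stratum to reduce estimator variance
    t = edges[:-1] + rng.random(n_samples) * np.diff(edges)
    dt = (t_far - t_near) / n_samples
    return np.exp(-np.sum(sigma(t)) * dt)            # quadrature estimate
```

With a spatially constant extinction this quadrature is exact for any sample placement; variance appears only when `sigma` varies along the ray, which is where the choice of sampling strategy matters.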
arXiv Detail & Related papers (2023-10-16T17:59:56Z) - Neural Fields meet Explicit Geometric Representation for Inverse
Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
arXiv Detail & Related papers (2023-04-06T17:51:54Z) - Neural Microfacet Fields for Inverse Rendering [54.15870869037466]
We present a method for recovering materials, geometry, and environment illumination from images of a scene.
Our method uses a microfacet reflectance model within a volumetric setting by treating each sample along the ray as a (potentially non-opaque) surface.
arXiv Detail & Related papers (2023-03-31T05:38:13Z) - Delicate Textured Mesh Recovery from NeRF via Adaptive Surface
Refinement [78.48648360358193]
We present a novel framework that generates textured surface meshes from images.
Our approach begins by efficiently initializing the geometry and view-dependency appearance with a NeRF.
We jointly refine the appearance with geometry and bake it into texture images for real-time rendering.
arXiv Detail & Related papers (2023-03-03T17:14:44Z) - PhySG: Inverse Rendering with Spherical Gaussians for Physics-based
Material Editing and Relighting [60.75436852495868]
We present PhySG, an inverse rendering pipeline that reconstructs geometry, materials, and illumination from scratch from RGB input images.
We demonstrate, with both synthetic and real data, that our reconstructions not only enable rendering of novel viewpoints, but also physics-based appearance editing of materials and illumination.
arXiv Detail & Related papers (2021-04-01T17:59:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.