NEMTO: Neural Environment Matting for Novel View and Relighting Synthesis of Transparent Objects
- URL: http://arxiv.org/abs/2303.11963v2
- Date: Thu, 4 Apr 2024 15:10:23 GMT
- Title: NEMTO: Neural Environment Matting for Novel View and Relighting Synthesis of Transparent Objects
- Authors: Dongqing Wang, Tong Zhang, Sabine Süsstrunk
- Abstract summary: We propose NEMTO, the first end-to-end neural rendering pipeline to model 3D transparent objects.
With 2D images of the transparent object as input, our method is capable of high-quality novel view and relighting synthesis.
- Score: 28.62468618676557
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose NEMTO, the first end-to-end neural rendering pipeline to model 3D transparent objects with complex geometry and unknown indices of refraction. Commonly used appearance modeling such as the Disney BSDF model cannot accurately address this challenging problem due to the complex light paths bending through refractions and the strong dependency of surface appearance on illumination. With 2D images of the transparent object as input, our method is capable of high-quality novel view and relighting synthesis. We leverage implicit Signed Distance Functions (SDF) to model the object geometry and propose a refraction-aware ray bending network to model the effects of light refraction within the object. Our ray bending network is more tolerant to geometric inaccuracies than traditional physically-based methods for rendering transparent objects. We provide extensive evaluations on both synthetic and real-world datasets to demonstrate our high-quality synthesis and the applicability of our method.
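As a rough illustration of the pipeline the abstract describes, here is a minimal PyTorch sketch of what a refraction-aware ray bending network could look like: an MLP that maps the SDF surface intersection, the incident direction, and the surface normal to a bent exit direction. The architecture, inputs, and single-bounce formulation are assumptions made for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RayBendingNet(nn.Module):
    """Hypothetical sketch of a ray bending network: predicts the exit
    direction of a ray refracted through a transparent object, given its
    first intersection with the SDF surface. Details are assumptions."""

    def __init__(self, hidden: int = 256):
        super().__init__()
        # Input: intersection point (3) + incident direction (3) + normal (3).
        self.mlp = nn.Sequential(
            nn.Linear(9, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, x, d_in, normal):
        d_out = self.mlp(torch.cat([x, d_in, normal], dim=-1))
        return F.normalize(d_out, dim=-1)  # output must be a unit direction

# The bent direction would then index the environment map (the "environment
# matting" step) to produce the refracted appearance.
net = RayBendingNet()
x = torch.rand(8, 3)                          # surface intersection points
d_in = F.normalize(torch.rand(8, 3), dim=-1)  # incident ray directions
n = F.normalize(torch.rand(8, 3), dim=-1)     # SDF surface normals
d_out = net(x, d_in, n)                       # exit directions, shape (8, 3)
```

A learned bend, as the abstract notes, is more forgiving than exact physics: analytically refracting through a slightly wrong normal yields a badly wrong exit ray, whereas a network can absorb such geometric inaccuracies.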
Related papers
- AniSDF: Fused-Granularity Neural Surfaces with Anisotropic Encoding for High-Fidelity 3D Reconstruction [55.69271635843385]
We present AniSDF, a novel approach that learns fused-granularity neural surfaces with physics-based encoding for high-fidelity 3D reconstruction.
Our method substantially improves on prior SDF-based methods in both geometry reconstruction and novel-view synthesis.
arXiv Detail & Related papers (2024-10-02T03:10:38Z)
- PBIR-NIE: Glossy Object Capture under Non-Distant Lighting [30.325872237020395]
Glossy objects present a significant challenge for 3D reconstruction from multi-view input images under natural lighting.
We introduce PBIR-NIE, an inverse rendering framework designed to holistically capture the geometry, material attributes, and surrounding illumination of such objects.
arXiv Detail & Related papers (2024-08-13T13:26:24Z)
- Inverse Rendering of Glossy Objects via the Neural Plenoptic Function and Radiance Fields [45.64333510966844]
Inverse rendering aims at recovering both geometry and materials of objects.
We propose a novel 5D Neural Plenoptic Function (NeP) based on NeRFs and ray tracing.
Our method can reconstruct high-fidelity geometry/materials of challenging glossy objects with complex lighting interactions from nearby objects.
arXiv Detail & Related papers (2024-03-24T16:34:47Z)
- Neural Radiance Fields for Transparent Object Using Visual Hull [0.8158530638728501]
Neural Radiance Fields (NeRF) is a recently introduced view synthesis method.
We propose a NeRF-based method consisting of the following three steps: First, we reconstruct the three-dimensional shape of a transparent object using a visual hull.
Second, we simulate the refraction of rays inside the transparent object according to Snell's law (a generic sketch of this step follows below). Last, we sample points along the refracted rays and feed them into NeRF.
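The refraction step has a standard vector form; the following is a generic NumPy sketch of Snell's law refraction, not code from the paper, and the index-of-refraction ratio in the example is a placeholder.

```python
import numpy as np

def refract(d, n, eta):
    """Refract unit direction d at a surface with unit normal n.
    eta = n1 / n2 is the ratio of refractive indices on the incident
    and transmitted sides. Returns None on total internal reflection."""
    cos_i = -np.dot(n, d)               # cosine of the incidence angle
    sin2_t = eta**2 * (1.0 - cos_i**2)  # Snell: sin(t) = eta * sin(i)
    if sin2_t > 1.0:
        return None                     # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n

# Example: a ray entering glass (refractive index ~1.5) from air at
# 45 degrees is bent toward the normal.
d = np.array([np.sin(np.pi / 4), -np.cos(np.pi / 4), 0.0])
n = np.array([0.0, 1.0, 0.0])
t = refract(d, n, eta=1.0 / 1.5)
```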
arXiv Detail & Related papers (2023-12-13T13:15:19Z)
- Neural Relighting with Subsurface Scattering by Learning the Radiance Transfer Gradient [73.52585139592398]
We propose a novel framework for learning the radiance transfer field via volume rendering.
We will publicly release our code and a novel light stage dataset of objects with subsurface scattering effects.
arXiv Detail & Related papers (2023-06-15T17:56:04Z)
- Seeing Through the Glass: Neural 3D Reconstruction of Object Inside a Transparent Container [61.50401406132946]
Transparent enclosures pose challenges of multiple light reflections and refractions at the interface between different propagation media.
We use an existing neural reconstruction method (NeuS) that implicitly represents the geometry and appearance of the inner subspace.
To account for the complex light interactions, we develop a hybrid rendering strategy that combines volume rendering with ray tracing (see the sketch below).
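At a schematic level, such a hybrid could ray-trace the refraction at the container interface (e.g., with a refract() routine like the one sketched earlier) and then run standard NeRF-style volume rendering along the bent ray inside. The compositing step below is the textbook quadrature; how the paper actually couples the two stages is not spelled out in this summary.

```python
import numpy as np

def volume_render(sigmas, colors, deltas):
    """NeRF-style alpha compositing along one ray.
    sigmas: (N,) densities; colors: (N, 3); deltas: (N,) step sizes."""
    alphas = 1.0 - np.exp(-sigmas * deltas)            # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas                           # contribution per sample
    return (weights[:, None] * colors).sum(axis=0)     # composited RGB
```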
arXiv Detail & Related papers (2023-03-24T04:58:27Z)
- Physics-based Indirect Illumination for Inverse Rendering [70.27534648770057]
We present a physics-based inverse rendering method that learns the illumination, geometry, and materials of a scene from posed multi-view RGB images.
As a side product, our physics-based inverse rendering model also facilitates flexible and realistic material editing as well as relighting.
arXiv Detail & Related papers (2022-12-09T07:33:49Z)
- NeRFactor: Neural Factorization of Shape and Reflectance Under an Unknown Illumination [60.89737319987051]
We address the problem of recovering shape and spatially-varying reflectance of an object from posed multi-view images of the object illuminated by one unknown lighting condition.
This enables the rendering of novel views of the object under arbitrary environment lighting and editing of the object's material properties (see the relighting sketch below).
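Once shape and reflectance are factored, relighting reduces to re-evaluating a discrete rendering-equation sum over the new environment's light directions. The sketch below uses a Lambertian BRDF as a stand-in for NeRFactor's learned, spatially-varying reflectance; all names and values are illustrative.

```python
import numpy as np

def relight(albedo, normal, light_dirs, light_rgb, solid_angles):
    """L_out = sum_i f_r * L_i * max(0, n . w_i) * dw_i for one point.
    Lambertian f_r = albedo / pi stands in for a learned BRDF."""
    cos_terms = np.clip(light_dirs @ normal, 0.0, None)   # (M,)
    f_r = albedo / np.pi                                  # (3,)
    return f_r * (light_rgb * (cos_terms * solid_angles)[:, None]).sum(axis=0)

# Example: one surface point lit by two directional samples of an
# environment map (directions, radiances, and solid angles are made up).
out = relight(
    albedo=np.array([0.8, 0.6, 0.4]),
    normal=np.array([0.0, 0.0, 1.0]),
    light_dirs=np.array([[0.0, 0.0, 1.0], [0.0, 0.6, 0.8]]),
    light_rgb=np.array([[1.0, 1.0, 1.0], [0.5, 0.5, 0.9]]),
    solid_angles=np.array([0.1, 0.1]),
)
```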
arXiv Detail & Related papers (2021-06-03T16:18:01Z)
- Through the Looking Glass: Neural 3D Reconstruction of Transparent Shapes [75.63464905190061]
Complex light paths induced by refraction and reflection have prevented both traditional and deep multi-view stereo methods from solving this problem.
We propose a physically-based network to recover the 3D shape of transparent objects from a few images acquired with a mobile phone camera.
Our experiments show successful recovery of high-quality 3D geometry for complex transparent shapes using as few as 5-12 natural images.
arXiv Detail & Related papers (2020-04-22T23:51:30Z)