Neural Radiance Fields for Transparent Object Using Visual Hull
- URL: http://arxiv.org/abs/2312.08118v1
- Date: Wed, 13 Dec 2023 13:15:19 GMT
- Title: Neural Radiance Fields for Transparent Object Using Visual Hull
- Authors: Heechan Yoon, Seungkyu Lee
- Abstract summary: The recently introduced Neural Radiance Fields (NeRF) is a view synthesis method.
We propose a NeRF-based method consisting of the following three steps: First, we reconstruct the three-dimensional shape of a transparent object using a visual hull.
Second, we simulate the refraction of rays inside the transparent object according to Snell's law. Finally, we sample points along the refracted rays and feed them into NeRF.
- Score: 0.8158530638728501
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unlike for opaque objects, novel view synthesis of transparent objects is a
challenging task, because a transparent object refracts background light, causing visual
distortions on its surface that change with the viewpoint. The recently introduced Neural
Radiance Fields (NeRF) is a view synthesis method, and thanks to its remarkable
performance, many follow-up applications based on NeRF have been developed across various
topics. However, when a scene contains an object with a different refractive index, such
as a transparent object, NeRF shows limited performance because the refraction of light
rays at the object's surface is not appropriately considered. To resolve this problem, we
propose a NeRF-based method consisting of three steps: First, we reconstruct the
three-dimensional shape of the transparent object using a visual hull. Second, we simulate
the refraction of rays inside the transparent object according to Snell's law. Finally, we
sample points along the refracted rays and feed them into NeRF. Experimental results
demonstrate that our method addresses the limitation of conventional NeRF with transparent
objects.
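
The abstract's second and third steps (bending camera rays with Snell's law at the visual-hull surface, then sampling along the bent rays before querying NeRF) can be illustrated with a short sketch. The Python below is not the authors' implementation: the function names, the refractive index of 1.5, and the assumption that hull intersection points and outward normals are already available are all illustrative.

```python
# Minimal sketch (not the authors' code): refract a camera ray at the entry and
# exit surfaces of a transparent object using Snell's law in vector form, then
# sample 3D points along the refracted segments for NeRF queries. Hit points and
# normals are assumed to come from the visual-hull mesh reconstructed in step 1.
import numpy as np

def refract(d, n, eta):
    """Refract unit direction d at a surface with unit normal n (facing the ray).

    eta = n1 / n2 is the ratio of refractive indices (incident / transmitted).
    Returns the refracted unit direction, or None on total internal reflection.
    """
    cos_i = -np.dot(n, d)
    sin2_t = eta ** 2 * (1.0 - cos_i ** 2)
    if sin2_t > 1.0:                      # total internal reflection
        return None
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n

def sample_refracted_ray(origin, direction, entry, n_entry, exit_, n_exit,
                         ior=1.5, num_samples=63):
    """Bend a ray at the hull's entry/exit points and return sample positions.

    entry/exit_ are hit points on the reconstructed hull, n_entry/n_exit the
    outward unit normals there; ior is the object's refractive index (air = 1).
    In the full pipeline the exit point would be found by intersecting the
    refracted in-object ray with the hull; here the given exit point is reused
    only to set the in-object segment length.
    """
    d = direction / np.linalg.norm(direction)
    d_in = refract(d, n_entry, 1.0 / ior)        # air -> object (no TIR possible)
    d_out = refract(d_in, -n_exit, ior / 1.0)    # object -> air
    if d_out is None:                            # crude fallback: ignore TIR
        d_out = d_in
    t_in = np.linalg.norm(entry - origin)
    t_mid = np.linalg.norm(exit_ - entry)
    # Piecewise-linear refracted path: before, inside, and after the object.
    segments = [(origin, d, t_in),
                (entry, d_in, t_mid),
                (exit_, d_out, 4.0)]             # 4.0: arbitrary far bound
    points = []
    for o, seg_dir, t_far in segments:
        ts = np.linspace(0.0, t_far, num_samples // len(segments))
        points.append(o[None, :] + ts[:, None] * seg_dir[None, :])
    return np.concatenate(points, axis=0)        # query the NeRF MLP at these points
```

In practice the entry/exit points and normals would come from ray-mesh intersection against the visual hull (for example with a mesh library such as trimesh), and total internal reflection would need proper handling rather than the fallback used in this sketch.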
Related papers
- NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections [57.63028964831785]
Recent works have improved NeRF's ability to render the detailed specular appearance of distant environment illumination, but they are unable to synthesize consistent reflections of closer content.
We address these issues with an approach based on ray tracing.
Instead of querying an expensive neural network for the outgoing view-dependent radiance at points along each camera ray, our model casts rays from these points and traces them through the NeRF representation to render feature vectors.
arXiv Detail & Related papers (2024-05-23T17:59:57Z)
- Aleth-NeRF: Illumination Adaptive NeRF with Concealing Field Assumption [65.96818069005145]
We introduce the concept of a "Concealing Field," which assigns transmittance values to the surrounding air to account for illumination effects.
In dark scenarios, we assume that object emissions maintain a standard lighting level but are attenuated as they traverse the air during the rendering process.
We present a comprehensive multi-view dataset captured under challenging illumination conditions for evaluation.
arXiv Detail & Related papers (2023-12-14T16:24:09Z)
- NeRRF: 3D Reconstruction and View Synthesis for Transparent and Specular Objects with Neural Refractive-Reflective Fields [23.099784003061618]
We introduce the refractive-reflective field to neural radiance fields (NeRF).
NeRF uses straight rays and fails to deal with complicated light path changes caused by refraction and reflection.
We propose a virtual cone supersampling technique to achieve efficient and effective anti-aliasing.
arXiv Detail & Related papers (2023-09-22T17:59:12Z)
- NeRF-DS: Neural Radiance Fields for Dynamic Specular Objects [63.04781030984006]
Dynamic Neural Radiance Field (NeRF) is a powerful algorithm capable of rendering photo-realistic novel view images from a monocular RGB video of a dynamic scene.
We address the limitation by reformulating the neural radiance field function to be conditioned on surface position and orientation in the observation space.
We evaluate our model based on the novel view synthesis quality with a self-collected dataset of different moving specular objects in realistic environments.
arXiv Detail & Related papers (2023-03-25T11:03:53Z)
- Seeing Through the Glass: Neural 3D Reconstruction of Object Inside a Transparent Container [61.50401406132946]
Transparent enclosures pose challenges of multiple light reflections and refractions at the interface between different propagation media.
We use an existing neural reconstruction method (NeuS) that implicitly represents the geometry and appearance of the inner subspace.
In order to account for complex light interactions, we develop a hybrid rendering strategy that combines volume rendering with ray tracing.
arXiv Detail & Related papers (2023-03-24T04:58:27Z)
- NEMTO: Neural Environment Matting for Novel View and Relighting Synthesis of Transparent Objects [28.62468618676557]
We propose NEMTO, the first end-to-end neural rendering pipeline to model 3D transparent objects.
With 2D images of the transparent object as input, our method is capable of high-quality novel view and relighting synthesis.
arXiv Detail & Related papers (2023-03-21T15:50:08Z)
- NeTO: Neural Reconstruction of Transparent Objects with Self-Occlusion Aware Refraction-Tracing [44.22576861939435]
We present a novel method, called NeTO, for capturing 3D geometry of solid transparent objects from 2D images via volume rendering.
Our method achieves faithful reconstruction results and outperforms prior works by a large margin.
arXiv Detail & Related papers (2023-03-20T15:50:00Z)
- NeRFactor: Neural Factorization of Shape and Reflectance Under an Unknown Illumination [60.89737319987051]
We address the problem of recovering shape and spatially-varying reflectance of an object from posed multi-view images of the object illuminated by one unknown lighting condition.
This enables the rendering of novel views of the object under arbitrary environment lighting and editing of the object's material properties.
arXiv Detail & Related papers (2021-06-03T16:18:01Z)
- Dense Reconstruction of Transparent Objects by Altering Incident Light Paths Through Refraction [40.696591594772876]
We introduce a fixed viewpoint approach to dense surface reconstruction of transparent objects based on refraction of light.
We present a setup that allows us to alter the incident light paths before light rays enter the object by immersing the object partially in a liquid.
arXiv Detail & Related papers (2021-05-20T19:01:12Z)
- Through the Looking Glass: Neural 3D Reconstruction of Transparent Shapes [75.63464905190061]
Complex light paths induced by refraction and reflection have prevented both traditional and deep multiview stereo from solving this problem.
We propose a physically-based network to recover 3D shape of transparent objects using a few images acquired with a mobile phone camera.
Our experiments show successful recovery of high-quality 3D geometry for complex transparent shapes using as few as 5-12 natural images.
arXiv Detail & Related papers (2020-04-22T23:51:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.