Sampling Neural Radiance Fields for Refractive Objects
- URL: http://arxiv.org/abs/2211.14799v1
- Date: Sun, 27 Nov 2022 11:43:21 GMT
- Authors: Jen-I Pan, Jheng-Wei Su, Kai-Wen Hsiao, Ting-Yu Yen, Hung-Kuo Chu
- Abstract summary: In this work, the scene is instead a heterogeneous volume with a piecewise-constant refractive index, in which a ray bends whenever it crosses an interface between regions of different refractive index.
For novel view synthesis of refractive objects, our NeRF-based framework aims to optimize the radiance fields of bounded volume and boundary from multi-view posed images with refractive object silhouettes.
Given the refractive index, we extend the stratified and hierarchical sampling techniques in NeRF to allow drawing samples along a curved path tracked by the Eikonal equation.
- Score: 8.539183778516795
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, differentiable volume rendering in neural radiance fields (NeRF)
has gained a lot of popularity, and its variants have attained many impressive
results. However, existing methods usually assume the scene is a homogeneous
volume, so rays are cast along straight paths. In this work, the scene is
instead a heterogeneous volume with a piecewise-constant refractive index, in
which a ray bends whenever it crosses an interface between regions of
different refractive index. For novel view synthesis of refractive objects, our NeRF-based
framework aims to optimize the radiance fields of bounded volume and boundary
from multi-view posed images with refractive object silhouettes. To tackle this
challenging problem, the refractive index of a scene is reconstructed from
silhouettes. Given the refractive index, we extend the stratified and
hierarchical sampling techniques in NeRF to allow drawing samples along a
curved path tracked by the Eikonal equation. The results indicate that our
framework outperforms the state-of-the-art method both quantitatively and
qualitatively, demonstrating better performance on the perceptual similarity
metric and an apparent improvement in the rendering quality on several
synthetic and real scenes.
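The curved-ray sampling the abstract describes can be sketched numerically. The Eikonal ray equations state that, for a refractive-index field n(x), a ray x(s) satisfies d/ds(n dx/ds) = grad n. The minimal sketch below (an assumption of this summary, not the paper's actual implementation) Euler-integrates these equations with a fixed step and returns the sample positions along the resulting curved path; the paper's stratified/hierarchical sampling would then jitter and redistribute samples along this path. The function names `grad_n` and `eikonal_march` are hypothetical.

```python
import numpy as np

def grad_n(x, n_field, eps=1e-4):
    """Central-difference gradient of the refractive-index field at x."""
    g = np.zeros(3)
    for i in range(3):
        d = np.zeros(3)
        d[i] = eps
        g[i] = (n_field(x + d) - n_field(x - d)) / (2 * eps)
    return g

def eikonal_march(origin, direction, n_field, n_steps=64, ds=0.05):
    """Trace a curved ray through a refractive-index field n(x) by
    Euler-integrating the Eikonal ray equations, written as the system
        dx/ds = v / n(x),   dv/ds = grad n(x),   with v = n * dx/ds.
    Returns the sample positions along the curved path."""
    x = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    v = n_field(x) * d / np.linalg.norm(d)   # v = n * unit tangent
    samples = [x.copy()]
    for _ in range(n_steps):
        x = x + ds * v / n_field(x)          # advance position
        v = v + ds * grad_n(x, n_field)      # bend toward higher n
        samples.append(x.copy())
    return np.stack(samples)
```

In a homogeneous medium (constant n) the gradient vanishes and the path reduces to the straight ray of standard NeRF, which is a useful sanity check for any curved-ray integrator.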
Related papers
- NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections [57.63028964831785]
Recent works have improved NeRF's ability to render detailed specular appearance of distant environment illumination, but are unable to synthesize consistent reflections of closer content.
We address these issues with an approach based on ray tracing.
Instead of querying an expensive neural network for the outgoing view-dependent radiance at points along each camera ray, our model casts rays from these points and traces them through the NeRF representation to render feature vectors.
arXiv Detail & Related papers (2024-05-23T17:59:57Z) - Anisotropic Neural Representation Learning for High-Quality Neural Rendering [0.0]
We propose an anisotropic neural representation learning method that utilizes learnable view-dependent features to improve scene representation and reconstruction.
Our method is flexible and can be plugged into NeRF-based frameworks.
arXiv Detail & Related papers (2023-11-30T07:29:30Z) - NeISF: Neural Incident Stokes Field for Geometry and Material Estimation [50.588983686271284]
Multi-view inverse rendering is the problem of estimating the scene parameters such as shapes, materials, or illuminations from a sequence of images captured under different viewpoints.
We propose Neural Incident Stokes Fields (NeISF), a multi-view inverse framework that reduces ambiguities using polarization cues.
arXiv Detail & Related papers (2023-11-22T06:28:30Z) - NeRF Revisited: Fixing Quadrature Instability in Volume Rendering [8.82933411096762]
Volume rendering requires evaluating an integral along each ray, which is numerically approximated with a finite sum.
The rendered result is unstable w.r.t. the choice of samples along the ray, a phenomenon that we dub quadrature instability.
We propose a mathematically principled solution by reformulating the sample-based rendering equation so that it corresponds to the exact integral under piecewise linear volume density.
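For context on the quadrature being discussed, the finite sum conventionally used in NeRF assumes piecewise-constant density between samples: alpha_i = 1 - exp(-sigma_i * delta_i), transmittance T_i = prod_{j<i} (1 - alpha_j), and weight w_i = T_i * alpha_i. The sketch below shows this standard piecewise-constant quadrature, whose dependence on sample placement is exactly what the quoted paper identifies as unstable; it is an illustration, not that paper's proposed piecewise-linear fix.

```python
import numpy as np

def render_weights(sigma, deltas):
    """Standard NeRF volume-rendering quadrature under a
    piecewise-constant density assumption:
        alpha_i = 1 - exp(-sigma_i * delta_i)
        T_i     = prod_{j<i} (1 - alpha_j)
        w_i     = T_i * alpha_i
    sigma: per-sample densities; deltas: inter-sample distances."""
    alpha = 1.0 - np.exp(-sigma * deltas)
    # Transmittance: probability the ray survives to sample i.
    T = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    return T * alpha
```

A useful identity for checking an implementation: the weights telescope, so their sum equals 1 - exp(-sum_i sigma_i * delta_i), i.e. one minus the transmittance through the whole ray.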
arXiv Detail & Related papers (2023-10-31T17:49:48Z) - TensoIR: Tensorial Inverse Rendering [51.57268311847087]
TensoIR is a novel inverse rendering approach based on tensor factorization and neural fields.
TensoRF is a state-of-the-art approach for radiance field modeling.
arXiv Detail & Related papers (2023-04-24T21:39:13Z) - NeFII: Inverse Rendering for Reflectance Decomposition with Near-Field Indirect Illumination [48.42173911185454]
Inverse rendering methods aim to estimate geometry, materials and illumination from multi-view RGB images.
We propose an end-to-end inverse rendering pipeline that decomposes materials and illumination from multi-view images.
arXiv Detail & Related papers (2023-03-29T12:05:19Z) - Differentiable Rendering with Reparameterized Volume Sampling [2.717399369766309]
In view synthesis, a neural radiance field approximates underlying density and radiance fields based on a sparse set of scene pictures.
This rendering algorithm is fully differentiable and facilitates gradient-based optimization of the fields.
We propose a simple end-to-end differentiable sampling algorithm based on inverse transform sampling.
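The inverse-transform-sampling step mentioned here is the same primitive NeRF's hierarchical sampling uses: build a CDF over bins from per-bin weights, then invert it at uniform draws. The sketch below shows that basic inversion for a piecewise-constant PDF; it is a minimal illustration, and does not include the reparameterization that the quoted paper introduces to make the sampling differentiable end to end.

```python
import numpy as np

def inverse_transform_sample(w, t, u):
    """Sample from a piecewise-constant PDF over bins [t[i], t[i+1]]
    with unnormalized weights w, by inverting the CDF at uniform
    draws u in [0, 1). t has len(w) + 1 bin edges."""
    pdf = w / w.sum()
    cdf = np.concatenate([[0.0], np.cumsum(pdf)])
    # Locate the bin each draw falls into, then interpolate linearly.
    idx = np.searchsorted(cdf, u, side="right") - 1
    idx = np.clip(idx, 0, len(w) - 1)
    denom = cdf[idx + 1] - cdf[idx]
    frac = (u - cdf[idx]) / np.where(denom > 0, denom, 1.0)
    return t[idx] + frac * (t[idx + 1] - t[idx])
```

With equal weights the draws map linearly onto the domain, which makes the routine easy to unit-test before plugging it into a hierarchical sampler.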
arXiv Detail & Related papers (2023-02-21T19:56:50Z) - IntrinsicNeRF: Learning Intrinsic Neural Radiance Fields for Editable Novel View Synthesis [90.03590032170169]
We present intrinsic neural radiance fields, dubbed IntrinsicNeRF, which introduce intrinsic decomposition into the NeRF-based neural rendering method.
Our experiments and editing samples on both object-specific/room-scale scenes and synthetic/real-world data demonstrate that we can obtain consistent intrinsic decomposition results.
arXiv Detail & Related papers (2022-10-02T22:45:11Z) - InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering [55.70938412352287]
We present an information-theoretic regularization technique for few-shot novel view synthesis based on neural implicit representation.
The proposed approach minimizes potential reconstruction inconsistency that happens due to insufficient viewpoints.
We achieve consistently improved performance compared to existing neural view synthesis methods by large margins on multiple standard benchmarks.
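The core of the ray-entropy regularization described above can be sketched in a few lines: treat the per-sample rendering weights along a ray as a probability distribution and penalize its Shannon entropy, so that the weight mass concentrates on few samples (a sharp surface) instead of spreading out under sparse supervision. This is a minimal sketch of the entropy term only; the full method also includes details (such as masking near-empty rays) not shown here.

```python
import numpy as np

def ray_entropy(weights, eps=1e-10):
    """Shannon entropy of the normalized ray-weight distribution.
    Minimizing this as a regularizer encourages each ray's weight
    mass to concentrate on a few samples rather than spread out."""
    p = weights / (weights.sum() + eps)
    return -np.sum(p * np.log(p + eps))
```

A peaked weight distribution yields near-zero entropy, while uniform weights over N samples yield the maximum value log(N), so the penalty directly measures how diffuse the ray's density is.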
arXiv Detail & Related papers (2021-12-31T11:56:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.