PANDORA: Polarization-Aided Neural Decomposition Of Radiance
- URL: http://arxiv.org/abs/2203.13458v1
- Date: Fri, 25 Mar 2022 05:41:52 GMT
- Title: PANDORA: Polarization-Aided Neural Decomposition Of Radiance
- Authors: Akshat Dave, Yongyi Zhao, Ashok Veeraraghavan
- Abstract summary: Inverse rendering is a fundamental problem in computer graphics and vision.
Recent progress in representing scene properties as coordinate-based neural networks has facilitated neural inverse rendering.
We propose PANDORA, a polarimetric inverse rendering approach based on implicit neural representations.
- Score: 20.760987175553655
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reconstructing an object's geometry and appearance from multiple images, also
known as inverse rendering, is a fundamental problem in computer graphics and
vision. Inverse rendering is inherently ill-posed because the captured image is
an intricate function of unknown lighting conditions, material properties and
scene geometry. Recent progress in representing scene properties as
coordinate-based neural networks has facilitated neural inverse rendering,
resulting in impressive geometry reconstruction and novel-view synthesis. Our
key insight is that polarization is a useful cue for neural inverse rendering
as polarization strongly depends on surface normals and is distinct for diffuse
and specular reflectance. With the advent of commodity, on-chip, polarization
sensors, capturing polarization has become practical. Thus, we propose PANDORA,
a polarimetric inverse rendering approach based on implicit neural
representations. From multi-view polarization images of an object, PANDORA
jointly extracts the object's 3D geometry, separates the outgoing radiance into
diffuse and specular components, and estimates the illumination incident on the object. We
show that PANDORA outperforms state-of-the-art radiance decomposition
techniques. PANDORA outputs clean surface reconstructions free from texture
artefacts, models strong specularities accurately and estimates illumination
under practical unstructured scenarios.
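The polarization cue the abstract relies on can be made concrete with the standard Stokes-vector computation used with on-chip polarization sensors, which capture intensities behind linear polarizers at four angles per pixel block. This is a generic illustrative sketch (not PANDORA's code) with synthetic intensity values:

```python
import numpy as np

# Intensities behind a linear polarizer at 0, 45, 90, 135 degrees, as a
# commodity on-chip polarization sensor records per 2x2 pixel block.
# Values are synthetic, for illustration only.
i0, i45, i90, i135 = 1.0, 0.8, 0.2, 0.4

# Linear Stokes parameters recovered from the four measurements.
s0 = 0.5 * (i0 + i45 + i90 + i135)  # total intensity
s1 = i0 - i90                       # horizontal-vs-vertical component
s2 = i45 - i135                     # diagonal component

# Degree of linear polarization (DoLP): 0 = unpolarized, 1 = fully polarized.
# Specular reflection is typically far more polarized than diffuse reflection,
# which is why DoLP helps separate the two.
dolp = np.hypot(s1, s2) / s0

# Angle of linear polarization (AoLP) in [0, pi): tied to the azimuth of the
# surface normal (up to an ambiguity), which makes it a geometric cue.
aolp = 0.5 * np.arctan2(s2, s1)

print(f"DoLP = {dolp:.3f}, AoLP = {np.degrees(aolp):.1f} deg")
```

In a multi-view polarimetric pipeline these per-pixel DoLP/AoLP maps are the raw signal that constrains surface normals and the diffuse/specular split.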
Related papers
- GUS-IR: Gaussian Splatting with Unified Shading for Inverse Rendering [83.69136534797686]
We present GUS-IR, a novel framework designed to address the inverse rendering problem for complicated scenes featuring rough and glossy surfaces.
This paper starts by analyzing and comparing two prominent shading techniques commonly used for inverse rendering: forward shading and deferred shading.
We propose a unified shading solution that combines the advantages of both techniques for better decomposition.
arXiv Detail & Related papers (2024-11-12T01:51:05Z)
- NeRSP: Neural 3D Reconstruction for Reflective Objects with Sparse Polarized Images [62.752710734332894]
NeRSP is a Neural 3D reconstruction technique for Reflective surfaces with Sparse Polarized images.
We derive photometric and geometric cues from the polarimetric image formation model and multiview azimuth consistency.
We achieve state-of-the-art surface reconstruction results with only 6 views as input.
arXiv Detail & Related papers (2024-06-11T09:53:18Z)
- GNeRP: Gaussian-guided Neural Reconstruction of Reflective Objects with Noisy Polarization Priors [8.8400072344375]
Learning surfaces from neural radiance fields (NeRF) has become a rising topic in Multi-View Stereo (MVS).
Recent methods demonstrated their ability to reconstruct accurate 3D shapes of Lambertian scenes.
However, their results on reflective scenes are unsatisfactory due to the entanglement of specular radiance and complicated geometry.
arXiv Detail & Related papers (2024-03-18T15:58:03Z)
- SPIDeRS: Structured Polarization for Invisible Depth and Reflectance Sensing [31.605927493154656]
We introduce structured polarization for invisible depth and reflectance sensing (SPIDeRS)
The key idea is to modulate the angle of linear polarization (AoLP) of projected light at each pixel.
The use of polarization makes the projected light invisible and lets us directly recover not only depth but also surface normals and even reflectance.
arXiv Detail & Related papers (2023-12-07T18:59:21Z)
- Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
arXiv Detail & Related papers (2023-04-06T17:51:54Z)
- Polarimetric Inverse Rendering for Transparent Shapes Reconstruction [1.807492010338763]
We propose a novel method for the detailed reconstruction of transparent objects by exploiting polarimetric cues.
We implicitly represent the object's geometry as a neural network, while the polarization renderer renders the object's polarization images.
We build a polarization dataset for multi-view transparent shapes reconstruction to verify our method.
arXiv Detail & Related papers (2022-08-25T02:52:31Z)
- Self-calibrating Photometric Stereo by Neural Inverse Rendering [88.67603644930466]
This paper tackles the task of uncalibrated photometric stereo for 3D object reconstruction.
We propose a new method that jointly optimizes object shape, light directions, and light intensities.
Our method demonstrates state-of-the-art accuracy in light estimation and shape recovery on real-world datasets.
arXiv Detail & Related papers (2022-07-16T02:46:15Z)
- NeRS: Neural Reflectance Surfaces for Sparse-view 3D Reconstruction in the Wild [80.09093712055682]
We introduce a surface analog of implicit models called Neural Reflectance Surfaces (NeRS).
NeRS learns a neural shape representation of a closed surface that is diffeomorphic to a sphere, guaranteeing water-tight reconstructions.
We demonstrate that surface-based neural reconstructions enable learning from such data, outperforming volumetric neural rendering-based reconstructions.
arXiv Detail & Related papers (2021-10-14T17:59:58Z)
- NeRFactor: Neural Factorization of Shape and Reflectance Under an Unknown Illumination [60.89737319987051]
We address the problem of recovering shape and spatially-varying reflectance of an object from posed multi-view images of the object illuminated by one unknown lighting condition.
This enables the rendering of novel views of the object under arbitrary environment lighting and editing of the object's material properties.
arXiv Detail & Related papers (2021-06-03T16:18:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.