Unveiling the Ambiguity in Neural Inverse Rendering: A Parameter Compensation Analysis
- URL: http://arxiv.org/abs/2404.12819v1
- Date: Fri, 19 Apr 2024 11:56:29 GMT
- Title: Unveiling the Ambiguity in Neural Inverse Rendering: A Parameter Compensation Analysis
- Authors: Georgios Kouros, Minye Wu, Sushruth Nagesh, Xianling Zhang, Tinne Tuytelaars
- Abstract summary: Inverse rendering aims to reconstruct the scene properties of objects solely from multiview images.
In this paper, we utilize Neural Microfacet Fields (NMF), a state-of-the-art neural inverse rendering method, to illustrate the inherent ambiguity.
- Score: 36.353019226575576
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inverse rendering aims to reconstruct the scene properties of objects solely from multiview images. However, it is an ill-posed problem prone to producing ambiguous estimations deviating from physically accurate representations. In this paper, we utilize Neural Microfacet Fields (NMF), a state-of-the-art neural inverse rendering method, to illustrate the inherent ambiguity. We propose an evaluation framework to assess the degree of compensation or interaction between the estimated scene properties, aiming to explore the mechanisms behind this ill-posed problem and potential mitigation strategies. Specifically, we introduce artificial perturbations to one scene property and examine how adjusting another property can compensate for these perturbations. To facilitate such experiments, we introduce a disentangled NMF where material properties are independent. The experimental findings underscore the intrinsic ambiguity present in neural inverse rendering and highlight the importance of providing additional guidance through geometry, material, and illumination priors.
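To make the perturb-and-compensate idea concrete, the sketch below illustrates the style of experiment the abstract describes: one material property (roughness) is artificially perturbed and frozen, and another independent property (albedo) is re-optimized until the rendering loss is driven back down; a small residual despite the wrong roughness indicates that the parameters can compensate for one another. This is a minimal toy sketch, not the authors' code: the `render` function is a simple Blinn-Phong-style stand-in rather than NMF's microfacet model, and all names and values are illustrative assumptions.

```python
# Toy sketch (not the authors' code) of a perturbation-compensation experiment:
# perturb one scene property (roughness), freeze it, and re-optimize another
# property (albedo) to absorb the perturbation under a simple shading model.
import torch

def render(albedo, roughness, n_dot_h):
    """Minimal Blinn-Phong-style shading stand-in for a full renderer."""
    shininess = 2.0 / (roughness.clamp(1e-3, 1.0) ** 2) - 2.0
    specular = n_dot_h.clamp(min=0.0) ** shininess
    return albedo * n_dot_h.clamp(min=0.0) + 0.04 * specular

# "Ground-truth" scene properties and synthetic observations.
n_dot_h = torch.linspace(0.0, 1.0, 256)
gt_albedo, gt_roughness = torch.tensor(0.6), torch.tensor(0.3)
target = render(gt_albedo, gt_roughness, n_dot_h)

# Artificially perturb roughness and keep it fixed during re-optimization.
perturbed_roughness = gt_roughness + 0.2

# Re-optimize albedo alone to compensate for the perturbation.
albedo = torch.tensor(0.6, requires_grad=True)
optimizer = torch.optim.Adam([albedo], lr=1e-2)
for _ in range(500):
    optimizer.zero_grad()
    loss = torch.mean((render(albedo, perturbed_roughness, n_dot_h) - target) ** 2)
    loss.backward()
    optimizer.step()

# A low residual with an incorrect roughness signals parameter compensation,
# i.e. the ambiguity the paper analyzes.
print(f"compensating albedo: {albedo.item():.3f}, residual image loss: {loss.item():.2e}")
```

In the paper's framework the analogous experiment is run on full scenes with the disentangled NMF, measuring how far re-optimizing the remaining properties recovers the original rendering quality.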
Related papers
- NeISF: Neural Incident Stokes Field for Geometry and Material Estimation [50.588983686271284]
Multi-view inverse rendering is the problem of estimating the scene parameters such as shapes, materials, or illuminations from a sequence of images captured under different viewpoints.
We propose Neural Incident Stokes Fields (NeISF), a multi-view inverse rendering framework that reduces ambiguities using polarization cues.
arXiv Detail & Related papers (2023-11-22T06:28:30Z)
- Inverse Rendering of Translucent Objects using Physical and Neural Renderers [13.706425832518093]
In this work, we propose an inverse model that estimates 3D shape, spatially-varying reflectance, homogeneous scattering parameters, and an environment illumination jointly from only a pair of captured images of a translucent object.
Because both reconstructions are differentiable, we can compute a reconstruction loss to assist parameter estimation.
We constructed a large-scale synthetic dataset of translucent objects, which consists of 117K scenes.
arXiv Detail & Related papers (2023-05-15T04:03:11Z)
- IntrinsicNeRF: Learning Intrinsic Neural Radiance Fields for Editable Novel View Synthesis [90.03590032170169]
We present intrinsic neural radiance fields, dubbed IntrinsicNeRF, which introduce intrinsic decomposition into the NeRF-based neural rendering method.
Our experiments and editing samples on both object-specific/room-scale scenes and synthetic/real-world data demonstrate that we can obtain consistent intrinsic decomposition results.
arXiv Detail & Related papers (2022-10-02T22:45:11Z)
- RISP: Rendering-Invariant State Predictor with Differentiable Simulation and Rendering for Cross-Domain Parameter Estimation [110.4255414234771]
Existing solutions require massive training data or lack generalizability to unknown rendering configurations.
We propose a novel approach that marries domain randomization and differentiable rendering gradients to address this problem.
Our approach achieves significantly lower reconstruction errors and has better generalizability among unknown rendering configurations.
arXiv Detail & Related papers (2022-05-11T17:59:51Z)
- Occlusion Fields: An Implicit Representation for Non-Line-of-Sight Surface Reconstruction [3.0553868534759725]
Non-line-of-sight (NLoS) reconstruction aims to recover objects outside the field of view from measurements of light that is indirectly scattered off a directly visible, diffuse wall.
We propose a new representation and reconstruction technique for NLoS scenes that unifies the treatment of recoverability with the reconstruction itself.
arXiv Detail & Related papers (2022-03-16T14:47:45Z)
- NeRFactor: Neural Factorization of Shape and Reflectance Under an Unknown Illumination [60.89737319987051]
We address the problem of recovering shape and spatially-varying reflectance of an object from posed multi-view images of the object illuminated by one unknown lighting condition.
This enables the rendering of novel views of the object under arbitrary environment lighting and editing of the object's material properties.
arXiv Detail & Related papers (2021-06-03T16:18:01Z)
- Interpreting Deep Neural Networks with Relative Sectional Propagation by Analyzing Comparative Gradients and Hostile Activations [37.11665902583138]
We propose a new attribution method, Relative Sectional Propagation (RSP), for decomposing the output predictions of Deep Neural Networks (DNNs).
We define a hostile factor as an element that interferes with finding the attributions of the target and propagate it in a distinguishable way to overcome the non-suppressed nature of activated neurons.
Our method makes it possible to decompose the predictions of DNNs with clearer class-discriminativeness and more detailed elucidation of activated neurons than conventional attribution methods.
arXiv Detail & Related papers (2020-12-07T03:11:07Z)
- Attribute-Guided Adversarial Training for Robustness to Natural Perturbations [64.35805267250682]
We propose an adversarial training approach which learns to generate new samples so as to maximize the classifier's exposure to the attribute space.
Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations.
arXiv Detail & Related papers (2020-12-03T10:17:30Z)
- Reconstruction Bottlenecks in Object-Centric Generative Models [24.430685026986524]
We investigate the role of "reconstruction bottlenecks" for scene decomposition in GENESIS, a recent VAE-based model.
We show such bottlenecks determine reconstruction and segmentation quality and critically influence model behaviour.
arXiv Detail & Related papers (2020-07-13T08:52:48Z)