Inverse Global Illumination using a Neural Radiometric Prior
- URL: http://arxiv.org/abs/2305.02192v1
- Date: Wed, 3 May 2023 15:36:39 GMT
- Title: Inverse Global Illumination using a Neural Radiometric Prior
- Authors: Saeed Hadadan, Geng Lin, Jan Novák, Fabrice Rousselle, Matthias Zwicker
- Abstract summary: Inverse rendering methods that account for global illumination are becoming more popular.
This paper proposes a radiometric prior as a simple alternative to building complete path integrals in a traditional differentiable path tracer.
- Score: 26.29610954064107
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inverse rendering methods that account for global illumination are becoming
more popular, but current methods require evaluating and automatically
differentiating millions of path integrals by tracing multiple light bounces,
which remains expensive and prone to noise. Instead, this paper proposes a
radiometric prior as a simple alternative to building complete path integrals
in a traditional differentiable path tracer, while still correctly accounting
for global illumination. Inspired by the Neural Radiosity technique, we use a
neural network as a radiance function, and we introduce a prior consisting of
the norm of the residual of the rendering equation in the inverse rendering
loss. We train our radiance network and optimize scene parameters
simultaneously using a loss consisting of both a photometric term between
renderings and the multi-view input images, and our radiometric prior (the
residual term). This residual term enforces a physical constraint on the
optimization that ensures that the radiance field accounts for global
illumination. We compare our method to a vanilla differentiable path tracer,
and more advanced techniques such as Path Replay Backpropagation. Despite the
simplicity of our approach, we can recover scene parameters with comparable and
in some cases better quality, at considerably lower computation times.
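To make the structure of this optimization concrete, below is a minimal PyTorch sketch of such a combined loss: the radiometric prior penalizes the squared residual of the rendering equation, r(x, ω) = L_θ(x, ω) − (emitted + scattered radiance at x), while the photometric term matches the network's radiance to the input pixels. This is a heavily simplified illustration under stated assumptions, not the authors' implementation: the names (RadianceNet, scattering_rhs, scene_params) are hypothetical, emission is omitted, and the one-bounce scattering estimator is a crude stand-in for a proper Monte Carlo estimate with scene intersections and a BRDF model.

```python
import torch
import torch.nn as nn

class RadianceNet(nn.Module):
    """Toy stand-in for the neural radiance function L_theta(x, omega)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # RGB radiance
        )

    def forward(self, x, omega):
        # x: (..., 3) surface points, omega: (..., 3) outgoing directions
        return self.mlp(torch.cat([x, omega], dim=-1))

def scattering_rhs(net, scene_params, x, n_samples=16):
    """Crude Monte Carlo stand-in for the right-hand side of the rendering
    equation (emission omitted): an average of incoming radiance, weighted
    by an albedo that plays the role of the scene parameter being optimized.
    A real estimator would trace rays to find incoming radiance and evaluate
    a full BRDF model."""
    dirs = torch.randn(x.shape[0], n_samples, 3)
    dirs = dirs / dirs.norm(dim=-1, keepdim=True)
    L_in = net(x.unsqueeze(1).expand(-1, n_samples, -1), dirs)
    albedo = torch.sigmoid(scene_params["albedo"])  # optimized scene parameter
    return albedo * L_in.mean(dim=1)

def inverse_rendering_loss(net, scene_params, x, omega, target_pixels, lam=1.0):
    lhs = net(x, omega)                                # L_theta(x, omega)
    rhs = scattering_rhs(net, scene_params, x)         # scattered radiance estimate
    residual = (lhs - rhs).pow(2).mean()               # radiometric prior term
    photometric = (lhs - target_pixels).pow(2).mean()  # match multi-view input images
    return photometric + lam * residual

# Network weights and scene parameters are optimized jointly, as in the paper:
net = RadianceNet()
scene_params = {"albedo": torch.zeros(3, requires_grad=True)}
opt = torch.optim.Adam(list(net.parameters()) + [scene_params["albedo"]], lr=1e-3)
x, omega, target = torch.rand(128, 3), torch.randn(128, 3), torch.rand(128, 3)
loss = inverse_rendering_loss(net, scene_params, x, omega, target)
loss.backward()
opt.step()
```

The key design point the sketch illustrates is that the residual term constrains the radiance network to be a fixed point of the rendering equation, so global illumination is accounted for without differentiating multi-bounce path integrals.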
Related papers
- Flash Cache: Reducing Bias in Radiance Cache Based Inverse Rendering [62.92985004295714]
We present a method that avoids approximations that introduce bias into the renderings and, more importantly, the gradients used for optimization.
We show that by removing these biases our approach improves the generality of radiance cache based inverse rendering, as well as increasing quality in the presence of challenging light transport effects such as specular reflections.
arXiv Detail & Related papers (2024-09-09T17:59:57Z)
- Diffusion Posterior Illumination for Ambiguity-aware Inverse Rendering [63.24476194987721]
Inverse rendering, the process of inferring scene properties from images, is a challenging inverse problem.
Most existing solutions incorporate priors into the inverse-rendering pipeline to encourage plausible solutions.
We propose a novel scheme that integrates a denoising probabilistic diffusion model pre-trained on natural illumination maps into an optimization framework.
arXiv Detail & Related papers (2023-09-30T12:39:28Z)
- Self-Calibrating, Fully Differentiable NLOS Inverse Rendering [15.624750787186803]
Time-resolved non-line-of-sight (NLOS) imaging methods reconstruct hidden scenes by inverting the optical paths of indirect illumination measured at visible relay surfaces.
We introduce a fully-differentiable end-to-end NLOS inverse rendering pipeline that self-calibrates the imaging parameters during the reconstruction of hidden scenes.
We demonstrate the robustness of our method to consistently reconstruct geometry and albedo, even under significant noise levels.
arXiv Detail & Related papers (2023-09-21T13:15:54Z)
- NeFII: Inverse Rendering for Reflectance Decomposition with Near-Field Indirect Illumination [48.42173911185454]
Inverse rendering methods aim to estimate geometry, materials and illumination from multi-view RGB images.
We propose an end-to-end inverse rendering pipeline that decomposes materials and illumination from multi-view images.
arXiv Detail & Related papers (2023-03-29T12:05:19Z)
- Differentiable Neural Radiosity [28.72382947011186]
We introduce Differentiable Neural Radiosity, a novel method of representing the solution of the differential rendering equation using a neural network.
Inspired by neural radiosity techniques, we minimize the norm of the residual of the differential rendering equation to directly optimize our network.
arXiv Detail & Related papers (2022-01-31T12:53:37Z)
- InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering [55.70938412352287]
We present an information-theoretic regularization technique for few-shot novel view synthesis based on neural implicit representation.
The proposed approach minimizes potential reconstruction inconsistency that happens due to insufficient viewpoints.
We achieve consistently improved performance compared to existing neural view synthesis methods by large margins on multiple standard benchmarks.
arXiv Detail & Related papers (2021-12-31T11:56:01Z)
- Neural Radiosity [31.35525999999182]
We introduce Neural Radiosity, an algorithm to solve the rendering equation by minimizing the norm of its residual.
Our approach decouples solving the rendering equation from rendering (perspective) images, and allows us to efficiently synthesize arbitrary views of a scene.
arXiv Detail & Related papers (2021-05-26T04:10:00Z)
- Efficient and Differentiable Shadow Computation for Inverse Problems [64.70468076488419]
Differentiable geometric computation has received increasing interest for image-based inverse problems.
We propose an efficient approach for differentiable visibility and soft shadow computation.
As our formulation is differentiable, it can be used to solve inverse problems such as texture, illumination, rigid pose, and deformation recovery from images.
arXiv Detail & Related papers (2021-04-01T09:29:05Z)
- Photon-Driven Neural Path Guiding [102.12596782286607]
We present a novel neural path guiding approach that can reconstruct high-quality sampling distributions for path guiding from a sparse set of samples.
We leverage photons traced from light sources as the input for sampling density reconstruction, which is highly effective for challenging scenes with strong global illumination.
Our approach achieves significantly better rendering results of testing scenes than previous state-of-the-art path guiding methods.
arXiv Detail & Related papers (2020-10-05T04:54:01Z)