Rotation-Equivariant Conditional Spherical Neural Fields for Learning a Natural Illumination Prior
- URL: http://arxiv.org/abs/2206.03858v1
- Date: Tue, 7 Jun 2022 13:02:49 GMT
- Title: Rotation-Equivariant Conditional Spherical Neural Fields for Learning a Natural Illumination Prior
- Authors: James A. D. Gardner, Bernhard Egger, William A. P. Smith
- Abstract summary: We propose a conditional neural field representation based on a variational auto-decoder with a SIREN network.
We train our model on a dataset of 1.6K HDR environment maps, demonstrate its applicability for an inverse rendering task and show environment map completion from partial observations.
- Score: 24.82786494623801
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Inverse rendering is an ill-posed problem. Previous work has sought to
resolve this by focussing on priors for object or scene shape or appearance. In
this work, we instead focus on a prior for natural illumination. Current
methods rely on spherical harmonic lighting or other generic representations
and, at best, a simplistic prior on the parameters. We propose a conditional
neural field representation based on a variational auto-decoder with a SIREN
network and, extending Vector Neurons, build equivariance directly into the
network. Using this we develop a rotation-equivariant, high dynamic range (HDR)
neural illumination model that is compact and able to express complex,
high-frequency features of natural environment maps. Training our model on a
curated dataset of 1.6K HDR environment maps of natural scenes, we compare it
against traditional representations, demonstrate its applicability for an
inverse rendering task and show environment map completion from partial
observations. A PyTorch implementation, our dataset and trained models can be
found at jadgardner.github.io/RENI.
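The core idea of the abstract, a conditional neural field conditioned on a per-map latent code in auto-decoder style, can be sketched in a few lines. The following is a minimal, hypothetical NumPy sketch of a SIREN-style conditional field: layer widths, the latent dimension, and the concatenation-based conditioning are illustrative assumptions, and the rotation equivariance that RENI builds in via Vector Neurons is omitted here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def siren_layer(in_f, out_f, w0=30.0, first=False):
    """Weights with the SIREN initialisation (Sitzmann et al. 2020)."""
    bound = 1.0 / in_f if first else np.sqrt(6.0 / in_f) / w0
    W = rng.uniform(-bound, bound, size=(in_f, out_f))
    b = np.zeros(out_f)
    return W, b, w0

def forward(layers, x):
    """Sine-activated hidden layers; the final layer is linear so the
    output radiance stays unbounded (HDR)."""
    for W, b, w0 in layers[:-1]:
        x = np.sin(w0 * (x @ W + b))
    W, b, _ = layers[-1]
    return x @ W + b

latent_dim, hidden = 36, 128  # illustrative sizes, not the paper's
layers = [
    siren_layer(3 + latent_dim, hidden, first=True),
    siren_layer(hidden, hidden),
    siren_layer(hidden, hidden),
    siren_layer(hidden, 3),  # RGB radiance per direction
]

# Query: unit directions on the sphere, conditioned on a latent code z.
# In an auto-decoder, z is optimised per environment map during training.
dirs = rng.normal(size=(8, 3))
dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
z = np.zeros(latent_dim)
x = np.concatenate([dirs, np.broadcast_to(z, (8, latent_dim))], axis=-1)
rgb = forward(layers, x)
print(rgb.shape)  # (8, 3)
```

Because the field maps a direction (plus latent code) directly to radiance, the representation is resolution-free: an environment map is recovered by evaluating the network at any desired grid of directions.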
Related papers
- Environment Maps Editing using Inverse Rendering and Adversarial Implicit Functions [8.20594611891252]
Editing High Dynamic Range (HDR) environment maps with an inverse differentiable rendering architecture is a complex inverse problem.
We introduce a novel method for editing HDR environment maps using differentiable rendering, addressing sparsity and variance between values.
Our approach can pave the way to interesting tasks, such as estimating a new environment map given a rendering with novel light sources.
arXiv Detail & Related papers (2024-10-24T10:27:29Z) - Neural Differential Appearance Equations [14.053608981988793]
We propose a method to reproduce dynamic appearance textures with space-stationary but time-varying visual statistics.
We adopt the neural ordinary differential equation to learn the underlying dynamics of appearance from a target exemplar.
Our experiments show that our method consistently yields realistic and coherent results.
arXiv Detail & Related papers (2024-09-23T11:29:19Z) - RENI++ A Rotation-Equivariant, Scale-Invariant, Natural Illumination Prior [22.675951948615825]
Inverse rendering is an ill-posed problem.
Current methods rely on spherical harmonic lighting or other generic representations.
We propose a conditional neural field representation based on a variational auto-decoder.
We train our model on a dataset of 1.6K HDR environment maps.
arXiv Detail & Related papers (2023-11-15T20:48:26Z) - Deformable Neural Radiance Fields using RGB and Event Cameras [65.40527279809474]
We develop a novel method to model the deformable neural radiance fields using RGB and event cameras.
The proposed method uses the asynchronous stream of events and sparse RGB frames.
Experiments conducted on both realistically rendered graphics and real-world datasets demonstrate a significant benefit of the proposed method.
arXiv Detail & Related papers (2023-09-15T14:19:36Z) - Relightable and Animatable Neural Avatar from Sparse-View Video [66.77811288144156]
This paper tackles the challenge of creating relightable and animatable neural avatars from sparse-view (or even monocular) videos of dynamic humans under unknown illumination.
arXiv Detail & Related papers (2023-08-15T17:42:39Z) - NeAI: A Pre-convoluted Representation for Plug-and-Play Neural Ambient Illumination [28.433403714053103]
We propose a framework named neural ambient illumination (NeAI).
NeAI uses Neural Radiance Fields (NeRF) as a lighting model to handle complex lighting in a physically based way.
Experiments demonstrate the superior performance of novel-view rendering compared to previous works.
arXiv Detail & Related papers (2023-04-18T06:32:30Z) - NeRF-DS: Neural Radiance Fields for Dynamic Specular Objects [63.04781030984006]
Dynamic Neural Radiance Field (NeRF) is a powerful algorithm capable of rendering photo-realistic novel view images from a monocular RGB video of a dynamic scene.
We address the limitation by reformulating the neural radiance field function to be conditioned on surface position and orientation in the observation space.
We evaluate our model based on the novel view synthesis quality with a self-collected dataset of different moving specular objects in realistic environments.
arXiv Detail & Related papers (2023-03-25T11:03:53Z) - Generalizable Patch-Based Neural Rendering [46.41746536545268]
We propose a new paradigm for learning models that can synthesize novel views of unseen scenes.
Our method is capable of predicting the color of a target ray in a novel scene directly, just from a collection of patches sampled from the scene.
We show that our approach outperforms the state-of-the-art on novel view synthesis of unseen scenes even when being trained with considerably less data than prior work.
arXiv Detail & Related papers (2022-07-21T17:57:04Z) - Learning Multi-Object Dynamics with Compositional Neural Radiance Fields [63.424469458529906]
We present a method to learn compositional predictive models from image observations based on implicit object encoders, Neural Radiance Fields (NeRFs), and graph neural networks.
NeRFs have become a popular choice for representing scenes due to their strong 3D prior.
For planning, we utilize RRTs in the learned latent space, where we can exploit our model and the implicit object encoder to make sampling the latent space informative and more efficient.
arXiv Detail & Related papers (2022-02-24T01:31:29Z) - RegNeRF: Regularizing Neural Radiance Fields for View Synthesis from Sparse Inputs [79.00855490550367]
We show that NeRF can produce photorealistic renderings of unseen viewpoints when many input views are available, but its quality degrades significantly when only sparse inputs are given.
We address this by regularizing the geometry and appearance of patches rendered from unobserved viewpoints.
Our model outperforms not only other methods that optimize over a single scene, but also conditional models that are extensively pre-trained on large multi-view datasets.
arXiv Detail & Related papers (2021-12-01T18:59:46Z) - NeRFactor: Neural Factorization of Shape and Reflectance Under an Unknown Illumination [60.89737319987051]
We address the problem of recovering shape and spatially-varying reflectance of an object from posed multi-view images of the object illuminated by one unknown lighting condition.
This enables the rendering of novel views of the object under arbitrary environment lighting and editing of the object's material properties.
arXiv Detail & Related papers (2021-06-03T16:18:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.