RENI++ A Rotation-Equivariant, Scale-Invariant, Natural Illumination Prior
- URL: http://arxiv.org/abs/2311.09361v1
- Date: Wed, 15 Nov 2023 20:48:26 GMT
- Title: RENI++ A Rotation-Equivariant, Scale-Invariant, Natural Illumination Prior
- Authors: James A. D. Gardner, Bernhard Egger, William A. P. Smith
- Abstract summary: Inverse rendering is an ill-posed problem.
Current methods rely on spherical harmonic lighting or other generic representations.
We propose a conditional neural field representation based on a variational auto-decoder.
We train our model on a dataset of 1.6K HDR environment maps.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Inverse rendering is an ill-posed problem. Previous work has sought to
resolve this by focussing on priors for object or scene shape or appearance. In
this work, we instead focus on a prior for natural illuminations. Current
methods rely on spherical harmonic lighting or other generic representations
and, at best, a simplistic prior on the parameters. This results in limitations
for the inverse setting in terms of the expressivity of the illumination
conditions, especially when taking specular reflections into account. We
propose a conditional neural field representation based on a variational
auto-decoder and a transformer decoder. We extend Vector Neurons to build
equivariance directly into our architecture and, leveraging a scale-invariant
loss function inspired by depth estimation, we enable the accurate
representation of High Dynamic Range (HDR) images. The result is a compact,
rotation-equivariant HDR neural illumination model capable of capturing
complex, high-frequency features in natural environment maps. Training our
model on a curated dataset of 1.6K HDR environment maps of natural scenes, we
compare it against traditional representations, demonstrate its applicability
for an inverse rendering task and show environment map completion from partial
observations. We share our PyTorch implementation, dataset and trained models
at https://github.com/JADGardner/ns_reni
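
The two ingredients the abstract names, Vector-Neuron-style layers (whose channel-mixing linear maps commute with 3D rotations) and a scale-invariant log-space loss borrowed from depth estimation, can be illustrated with a minimal sketch. This is not the paper's implementation; the function names, the λ weighting, and the toy shapes are assumptions for illustration only.

```python
import numpy as np

def vn_linear(W, V):
    """Vector-Neuron-style linear layer: mixes C_in vector features (each in
    R^3) into C_out vector features. Because it acts only across channels,
    it commutes with any rotation R of the 3D axis:
    W @ (V @ R.T) == (W @ V) @ R.T, which is the equivariance property."""
    return W @ V  # W: (C_out, C_in), V: (C_in, 3)

def scale_invariant_log_loss(pred, target, lam=1.0, eps=1e-8):
    """Scale-invariant loss in log space, in the spirit of the depth-estimation
    loss of Eigen et al. With lam=1, a global rescaling of `pred` (e.g. a
    change of HDR exposure) shifts all log-residuals by a constant that the
    second term cancels, leaving the loss unchanged."""
    d = np.log(pred + eps) - np.log(target + eps)
    n = d.size
    return (d ** 2).sum() / n - lam * d.sum() ** 2 / n ** 2

# Rotation equivariance of the channel-mixing layer, checked numerically:
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))   # 3 input vector channels -> 4 output
V = rng.standard_normal((3, 3))   # 3 vector features in R^3
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])  # rotation about z
assert np.allclose(vn_linear(W, V @ R.T), vn_linear(W, V) @ R.T)

# Scale invariance of the loss: multiplying the prediction by any positive
# constant does not change the loss when lam=1.
pred = rng.uniform(0.1, 10.0, size=64)    # stand-in for HDR radiance values
target = rng.uniform(0.1, 10.0, size=64)
assert np.isclose(scale_invariant_log_loss(pred, target),
                  scale_invariant_log_loss(5.0 * pred, target))
```

Note that full invariance holds only for `lam=1`; smaller values penalize a mismatch in overall scale and only partially discount it.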
Related papers
- Environment Maps Editing using Inverse Rendering and Adversarial Implicit Functions [8.20594611891252]
Editing High Dynamic Range environment maps using an inverse differentiable rendering architecture is a complex inverse problem.
We introduce a novel method for editing HDR environment maps using differentiable rendering, addressing sparsity and variance between values.
Our approach can pave the way to interesting tasks, such as estimating a new environment map given a rendering with novel light sources.
arXiv Detail & Related papers (2024-10-24T10:27:29Z) - SpecNeRF: Gaussian Directional Encoding for Specular Reflections [43.110815974867315]
We propose a learnable Gaussian directional encoding to better model the view-dependent effects under near-field lighting conditions.
Our new directional encoding captures the spatially-varying nature of near-field lighting and emulates the behavior of prefiltered environment maps.
It enables the efficient evaluation of preconvolved specular color at any 3D location with varying roughness coefficients.
arXiv Detail & Related papers (2023-12-20T15:20:25Z) - Deformable Neural Radiance Fields using RGB and Event Cameras [65.40527279809474]
We develop a novel method to model the deformable neural radiance fields using RGB and event cameras.
The proposed method uses the asynchronous stream of events and sparse RGB frames.
Experiments conducted on both realistically rendered graphics and real-world datasets demonstrate a significant benefit of the proposed method.
arXiv Detail & Related papers (2023-09-15T14:19:36Z) - Relightable and Animatable Neural Avatar from Sparse-View Video [66.77811288144156]
This paper tackles the challenge of creating relightable and animatable neural avatars from sparse-view (or even monocular) videos of dynamic humans under unknown illumination.
arXiv Detail & Related papers (2023-08-15T17:42:39Z) - Spatiotemporally Consistent HDR Indoor Lighting Estimation [66.26786775252592]
We propose a physically-motivated deep learning framework to solve the indoor lighting estimation problem.
Given a single LDR image with a depth map, our method predicts spatially consistent lighting at any given image position.
Our framework achieves photorealistic lighting prediction with higher quality compared to state-of-the-art single-image or video-based methods.
arXiv Detail & Related papers (2023-05-07T20:36:29Z) - NeRF-DS: Neural Radiance Fields for Dynamic Specular Objects [63.04781030984006]
Dynamic Neural Radiance Field (NeRF) is a powerful algorithm capable of rendering photo-realistic novel view images from a monocular RGB video of a dynamic scene.
We address the limitation by reformulating the neural radiance field function to be conditioned on surface position and orientation in the observation space.
We evaluate our model based on the novel view synthesis quality with a self-collected dataset of different moving specular objects in realistic environments.
arXiv Detail & Related papers (2023-03-25T11:03:53Z) - CROSSFIRE: Camera Relocalization On Self-Supervised Features from an Implicit Representation [3.565151496245487]
We use Neural Radiance Fields as an implicit map of a given scene and propose a camera relocalization tailored for this representation.
The proposed method enables the precise position of a device to be computed in real time from a single RGB camera during navigation.
arXiv Detail & Related papers (2023-03-08T20:22:08Z) - Shape, Pose, and Appearance from a Single Image via Bootstrapped Radiance Field Inversion [54.151979979158085]
We introduce a principled end-to-end reconstruction framework for natural images, where accurate ground-truth poses are not available.
We leverage an unconditional 3D-aware generator, to which we apply a hybrid inversion scheme where a model produces a first guess of the solution.
Our framework can de-render an image in as few as 10 steps, enabling its use in practical scenarios.
arXiv Detail & Related papers (2022-11-21T17:42:42Z) - Rotation-Equivariant Conditional Spherical Neural Fields for Learning a Natural Illumination Prior [24.82786494623801]
We propose a conditional neural field representation based on a variational auto-decoder with a SIREN network.
We train our model on a dataset of 1.6K HDR environment maps, demonstrate its applicability for an inverse rendering task and show environment map completion from partial observations.
arXiv Detail & Related papers (2022-06-07T13:02:49Z) - NeRFactor: Neural Factorization of Shape and Reflectance Under an Unknown Illumination [60.89737319987051]
We address the problem of recovering shape and spatially-varying reflectance of an object from posed multi-view images of the object illuminated by one unknown lighting condition.
This enables the rendering of novel views of the object under arbitrary environment lighting and editing of the object's material properties.
arXiv Detail & Related papers (2021-06-03T16:18:01Z) - Deep Lighting Environment Map Estimation from Spherical Panoramas [0.0]
We present a data-driven model that estimates an HDR lighting environment map from a single LDR monocular spherical panorama.
We exploit the availability of surface geometry to employ image-based relighting as a data generator and supervision mechanism.
arXiv Detail & Related papers (2020-05-16T14:23:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.