NeRF, meet differential geometry!
- URL: http://arxiv.org/abs/2206.14938v1
- Date: Wed, 29 Jun 2022 22:45:34 GMT
- Title: NeRF, meet differential geometry!
- Authors: Thibaud Ehret, Roger Marí, Gabriele Facciolo
- Abstract summary: We show how differential geometry can provide regularization tools for robustly training NeRF-like models.
We show how these tools yield a direct mathematical formalism of previously proposed NeRF variants aimed at improving the performance in challenging conditions.
- Score: 10.269997499911668
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural radiance fields, or NeRF, represent a breakthrough in the field of
novel view synthesis and 3D modeling of complex scenes from multi-view image
collections. Numerous recent works have been focusing on making the models more
robust, by means of regularization, so as to be able to train with possibly
inconsistent and/or very sparse data. In this work, we scratch the surface of
how differential geometry can provide regularization tools for robustly
training NeRF-like models, which are modified so as to represent continuous and
infinitely differentiable functions. In particular, we show how these tools
yield a direct mathematical formalism of previously proposed NeRF variants
aimed at improving the performance in challenging conditions (i.e. RegNeRF).
Based on this, we show how the same formalism can be used to natively encourage
the regularity of surfaces (by means of Gaussian and Mean Curvatures) making it
possible, for example, to learn surfaces from a very limited number of views.
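The curvature-based regularity mentioned above can be made concrete for any implicit surface: for a level set {f = 0}, both the mean and Gaussian curvature are recoverable from the gradient and Hessian of f. The sketch below (not code from the paper; a self-contained numpy illustration using finite differences on a sphere SDF) shows the standard implicit-surface formulas H = (g·Hf·g − |g|² tr Hf) / (2|g|³) and K = (g·adj(Hf)·g) / |g|⁴, which a NeRF-like model with a differentiable density could penalize during training:

```python
import numpy as np

def sdf_sphere(p, r=2.0):
    """Signed distance to a sphere of radius r centred at the origin."""
    return np.linalg.norm(p) - r

def grad(f, p, h=1e-4):
    """Central-difference gradient of f at p (stand-in for autodiff)."""
    g = np.zeros(3)
    for i in range(3):
        e = np.zeros(3); e[i] = h
        g[i] = (f(p + e) - f(p - e)) / (2 * h)
    return g

def hessian(f, p, h=1e-3):
    """Central-difference Hessian of f at p."""
    H = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            ei = np.zeros(3); ei[i] = h
            ej = np.zeros(3); ej[j] = h
            H[i, j] = (f(p + ei + ej) - f(p + ei - ej)
                       - f(p - ei + ej) + f(p - ei - ej)) / (4 * h * h)
    return H

def adjugate(A):
    """Adjugate (transposed cofactor matrix) of a 3x3 matrix."""
    C = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

def curvatures(f, p):
    """Mean and Gaussian curvature of the level set of f through p."""
    g = grad(f, p)
    Hf = hessian(f, p)
    n2 = g @ g
    H = (g @ Hf @ g - n2 * np.trace(Hf)) / (2 * n2 ** 1.5)
    K = (g @ adjugate(Hf) @ g) / n2 ** 2
    return H, K

# Point on the sphere of radius 2: expect |H| = 1/2 and K = 1/4.
H, K = curvatures(sdf_sphere, np.array([2.0, 0.0, 0.0]))
```

A surface-regularity loss would then penalize, e.g., |K| or H² averaged over sampled surface points; a trained model would use automatic differentiation rather than finite differences.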
Related papers
- SphereDiffusion: Spherical Geometry-Aware Distortion Resilient Diffusion Model [63.685132323224124]
Controllable spherical panoramic image generation holds substantial applicative potential across a variety of domains.
In this paper, we introduce a novel framework of SphereDiffusion to address these unique challenges.
Experiments on Structured3D dataset show that SphereDiffusion significantly improves the quality of controllable spherical image generation and relatively reduces around 35% FID on average.
arXiv Detail & Related papers (2024-03-15T06:26:46Z)
- ProvNeRF: Modeling per Point Provenance in NeRFs as a Stochastic Field [52.09661042881063]
We propose an approach that models the provenance of each point of a NeRF -- i.e., the locations where it is likely visible -- as a stochastic field.
We show that modeling per-point provenance during the NeRF optimization enriches the model, leading to improvements in novel view synthesis and uncertainty estimation.
arXiv Detail & Related papers (2024-01-16T06:19:18Z)
- Single-Stage Diffusion NeRF: A Unified Approach to 3D Generation and Reconstruction [77.69363640021503]
3D-aware image synthesis encompasses a variety of tasks, such as scene generation and novel view synthesis from images.
We present SSDNeRF, a unified approach that employs an expressive diffusion model to learn a generalizable prior of neural radiance fields (NeRF) from multi-view images of diverse objects.
arXiv Detail & Related papers (2023-04-13T17:59:01Z)
- Mask-Based Modeling for Neural Radiance Fields [20.728248301818912]
In this work, we unveil that 3D implicit representation learning can be significantly improved by mask-based modeling.
We propose MRVM-NeRF, a self-supervised pretraining target that predicts complete scene representations from partially masked features along each ray.
With this pretraining target, MRVM-NeRF enables better use of correlations across different points and views as the geometry priors.
arXiv Detail & Related papers (2023-04-11T04:12:31Z)
- Dynamic Point Fields [30.029872787758705]
We present a dynamic point field model that combines the representational benefits of explicit point-based graphics with implicit deformation networks.
We show the advantages of our dynamic point field framework in terms of its representational power, learning efficiency, and robustness to out-of-distribution novel poses.
arXiv Detail & Related papers (2023-04-05T17:52:37Z)
- Learning Versatile 3D Shape Generation with Improved AR Models [91.87115744375052]
Auto-regressive (AR) models have achieved impressive results in 2D image generation by modeling joint distributions in the grid space.
We propose the Improved Auto-regressive Model (ImAM) for 3D shape generation, which applies discrete representation learning based on a latent vector instead of volumetric grids.
arXiv Detail & Related papers (2023-03-26T12:03:18Z)
- GM-NeRF: Learning Generalizable Model-based Neural Radiance Fields from Multi-view Images [79.39247661907397]
We introduce Generalizable Model-based Neural Radiance Fields, an effective framework for synthesizing free-viewpoint images.
Specifically, we propose a geometry-guided attention mechanism to register the appearance code from multi-view 2D images to a geometry proxy.
arXiv Detail & Related papers (2023-03-24T03:32:02Z)
- RegNeRF: Regularizing Neural Radiance Fields for View Synthesis from Sparse Inputs [79.00855490550367]
We show that NeRF can produce photorealistic renderings of unseen viewpoints when many input views are available, but its quality degrades when only sparse inputs are given. We address this by regularizing the geometry and appearance of patches rendered from unobserved viewpoints.
Our model outperforms not only other methods that optimize over a single scene, but also conditional models that are extensively pre-trained on large multi-view datasets.
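RegNeRF's geometry regularizer penalizes depth variation within small patches rendered from unobserved viewpoints. A minimal numpy sketch of such a depth-smoothness term (the exact weighting and patch handling in the paper differ; this only illustrates the shape of the loss):

```python
import numpy as np

def depth_smoothness_loss(depth_patch):
    """Penalize squared differences between horizontally and vertically
    adjacent pixels of a rendered depth patch, encouraging locally
    smooth geometry from unobserved viewpoints."""
    dx = depth_patch[:, 1:] - depth_patch[:, :-1]  # horizontal neighbours
    dy = depth_patch[1:, :] - depth_patch[:-1, :]  # vertical neighbours
    return np.mean(dx ** 2) + np.mean(dy ** 2)

# A perfectly flat depth patch incurs zero penalty; a noisy one does not.
flat = np.ones((8, 8))
noisy = flat + 0.1 * np.random.default_rng(0).standard_normal((8, 8))
```

In training, this term would be added to the photometric loss, weighted by a hyperparameter, and backpropagated through the volume-rendered depth.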
arXiv Detail & Related papers (2021-12-01T18:59:46Z)
- NeRF-VAE: A Geometry Aware 3D Scene Generative Model [14.593550382914767]
We propose NeRF-VAE, a 3D scene generative model that incorporates geometric structure via NeRF and differentiable volume rendering.
NeRF-VAE's explicit 3D rendering process contrasts with previous generative models based on convolutional rendering.
We show that, once trained, NeRF-VAE is able to infer and render geometrically-consistent scenes from previously unseen 3D environments.
arXiv Detail & Related papers (2021-04-01T16:16:31Z)
- Nerfies: Deformable Neural Radiance Fields [44.923025540903886]
We present the first method capable of photorealistically reconstructing deformable scenes using photos/videos captured casually from mobile phones.
Our approach augments neural radiance fields (NeRF) by optimizing an additional continuous volumetric deformation field that warps each observed point into a canonical 5D NeRF.
We show that our method faithfully reconstructs non-rigidly deforming scenes and reproduces unseen views with high fidelity.
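The deformation field described above can be sketched as an offset added to each observed point before querying the canonical field. The tiny MLP below uses random, untrained weights and is purely illustrative of the warp p' = p + d(p); Nerfies' actual architecture, conditioning, and coarse-to-fine scheme are more involved:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-hidden-layer deformation MLP with random weights.
W1 = rng.standard_normal((3, 16)) * 0.1
b1 = np.zeros(16)
W2 = rng.standard_normal((16, 3)) * 0.01
b2 = np.zeros(3)

def deformation_offset(p):
    """Predict a per-point 3D offset d(p) from the observation frame."""
    h = np.tanh(p @ W1 + b1)
    return h @ W2 + b2

def warp_to_canonical(p):
    """Warp an observed point into the canonical frame: p' = p + d(p)."""
    return p + deformation_offset(p)

# The canonical NeRF would then be queried at the warped coordinate.
p_obs = np.array([0.3, -0.1, 0.8])
p_can = warp_to_canonical(p_obs)
```

During training, the deformation weights are optimized jointly with the canonical radiance field so that all observations explain one shared scene.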
arXiv Detail & Related papers (2020-11-25T18:55:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.