Likelihood-Based Generative Radiance Field with Latent Space
Energy-Based Model for 3D-Aware Disentangled Image Representation
- URL: http://arxiv.org/abs/2304.07918v1
- Date: Sun, 16 Apr 2023 23:44:41 GMT
- Title: Likelihood-Based Generative Radiance Field with Latent Space
Energy-Based Model for 3D-Aware Disentangled Image Representation
- Authors: Yaxuan Zhu, Jianwen Xie, Ping Li
- Abstract summary: We propose a likelihood-based top-down 3D-aware 2D image generative model that incorporates a 3D representation via Neural Radiance Fields (NeRF) and the 2D imaging process via differentiable volume rendering.
Experiments on several benchmark datasets demonstrate that the NeRF-LEBM can infer 3D object structures from 2D images, generate 2D images with novel views and objects, learn from incomplete 2D images, and learn from 2D images with known or unknown camera poses.
- Score: 43.41596483002523
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose the NeRF-LEBM, a likelihood-based top-down 3D-aware 2D image
generative model that incorporates a 3D representation via Neural Radiance Fields
(NeRF) and the 2D imaging process via differentiable volume rendering. The model
represents an image as the rendering of a 3D object into a 2D image, conditioned
on latent variables that account for object characteristics and are assumed to
follow informative, trainable energy-based prior models. We
propose two likelihood-based learning frameworks to train the NeRF-LEBM: (i)
maximum likelihood estimation with Markov chain Monte Carlo-based inference and
(ii) variational inference with the reparameterization trick. We study our
models in the scenarios with both known and unknown camera poses. Experiments
on several benchmark datasets demonstrate that the NeRF-LEBM can infer 3D
object structures from 2D images, generate 2D images with novel views and
objects, learn from incomplete 2D images, and learn from 2D images with known
or unknown camera poses.
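The two learning frameworks described in the abstract both revolve around sampling latents from the energy-based prior and from the posterior. Below is a minimal, hypothetical PyTorch sketch of how framework (i), maximum likelihood estimation with MCMC-based inference, could be organized: a small energy network defines a tilted-Gaussian prior over latent codes, and short-run Langevin dynamics draws prior and posterior samples. The class `EnergyPrior`, the placeholder `render(z, pose)` (standing in for the NeRF generator plus differentiable volume rendering), and all hyper-parameters are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class EnergyPrior(nn.Module):
    """Scalar energy E_alpha(z); the prior is p_alpha(z) ~ exp(-E_alpha(z)) * N(z; 0, I)."""
    def __init__(self, z_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, 1),
        )

    def forward(self, z):
        return self.net(z).squeeze(-1)  # one energy value per latent in the batch

def langevin_prior(ebm, z, n_steps=60, step=0.1):
    """Short-run Langevin dynamics targeting the EBM prior p_alpha(z)."""
    z = z.clone().detach().requires_grad_(True)
    for _ in range(n_steps):
        # Energy of the tilted Gaussian prior: E_alpha(z) + ||z||^2 / 2.
        energy = ebm(z).sum() + 0.5 * (z ** 2).sum()
        grad, = torch.autograd.grad(energy, z)
        z = (z - 0.5 * step ** 2 * grad
             + step * torch.randn_like(z)).detach().requires_grad_(True)
    return z.detach()

def langevin_posterior(ebm, render, z, image, pose, n_steps=60, step=0.1, sigma=0.3):
    """Short-run Langevin dynamics targeting p(z | image, pose) ~ p_alpha(z) p(image | z, pose).

    `render(z, pose)` is a hypothetical placeholder for the NeRF generator followed by
    differentiable volume rendering; a Gaussian observation model with std `sigma` is assumed.
    """
    z = z.clone().detach().requires_grad_(True)
    for _ in range(n_steps):
        recon = render(z, pose)
        log_lik = -((image - recon) ** 2).sum() / (2 * sigma ** 2)
        energy = ebm(z).sum() + 0.5 * (z ** 2).sum() - log_lik
        grad, = torch.autograd.grad(energy, z)
        z = (z - 0.5 * step ** 2 * grad
             + step * torch.randn_like(z)).detach().requires_grad_(True)
    return z.detach()
```

Given these samples, the generator parameters would be updated to raise the likelihood of observed images under the posterior latents, while the prior parameters would be updated from the gap between posterior-sample and prior-sample energy gradients. Framework (ii) would instead replace the posterior Langevin chain with an amortized inference network trained via the reparameterization trick.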
Related papers
- Enhancing Single Image to 3D Generation using Gaussian Splatting and Hybrid Diffusion Priors [17.544733016978928]
3D object generation from a single image involves estimating the full 3D geometry and texture of unseen views from an unposed RGB image captured in the wild.
Recent advancements in 3D object generation have introduced techniques that reconstruct an object's 3D shape and texture.
We propose bridging the gap between 2D and 3D diffusion models to address this limitation.
arXiv Detail & Related papers (2024-10-12T10:14:11Z)
- DistillNeRF: Perceiving 3D Scenes from Single-Glance Images by Distilling Neural Fields and Foundation Model Features [65.8738034806085]
DistillNeRF is a self-supervised learning framework for understanding 3D environments in autonomous driving scenes.
Our method is a generalizable feedforward model that predicts a rich neural scene representation from sparse, single-frame multi-view camera inputs.
arXiv Detail & Related papers (2024-06-17T21:15:13Z)
- Inpaint3D: 3D Scene Content Generation using 2D Inpainting Diffusion [18.67196713834323]
This paper presents a novel approach to inpainting 3D regions of a scene, given masked multi-view images, by distilling a 2D diffusion model into a learned 3D scene representation (e.g., a NeRF).
We show that this 2D diffusion model can still serve as a generative prior in a 3D multi-view reconstruction problem where we optimize a NeRF using a combination of score distillation sampling and NeRF reconstruction losses.
Because our method can generate content to fill any 3D masked region, we additionally demonstrate 3D object completion, 3D object replacement, and 3D scene completion.
arXiv Detail & Related papers (2023-12-06T19:30:04Z)
- HoloDiffusion: Training a 3D Diffusion Model using 2D Images [71.1144397510333]
We introduce a new diffusion setup that can be trained, end-to-end, with only posed 2D images for supervision.
We show that our diffusion models are scalable, train robustly, and are competitive in terms of sample quality and fidelity to existing approaches for 3D generative modeling.
arXiv Detail & Related papers (2023-03-29T07:35:56Z)
- GAN2X: Non-Lambertian Inverse Rendering of Image GANs [85.76426471872855]
We present GAN2X, a new method for unsupervised inverse rendering that only uses unpaired images for training.
Unlike previous Shape-from-GAN approaches that mainly focus on 3D shapes, we make the first attempt to also recover non-Lambertian material properties by exploiting the pseudo-paired data generated by a GAN.
Experiments demonstrate that GAN2X can accurately decompose 2D images into 3D shape, albedo, and specular properties for different object categories, and achieves state-of-the-art performance for unsupervised single-view 3D face reconstruction.
arXiv Detail & Related papers (2022-06-18T16:58:49Z)
- Disentangled3D: Learning a 3D Generative Model with Disentangled Geometry and Appearance from Monocular Images [94.49117671450531]
State-of-the-art 3D generative models are GANs which use neural 3D volumetric representations for synthesis.
In this paper, we design a 3D GAN which can learn a disentangled model of objects, just from monocular observations.
arXiv Detail & Related papers (2022-03-29T22:03:18Z)
- DRaCoN -- Differentiable Rasterization Conditioned Neural Radiance Fields for Articulated Avatars [92.37436369781692]
We present DRaCoN, a framework for learning full-body volumetric avatars.
It exploits the advantages of both 2D and 3D neural rendering techniques.
Experiments on the challenging ZJU-MoCap and Human3.6M datasets indicate that DRaCoN outperforms state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T17:59:15Z)
- 3D object reconstruction and 6D-pose estimation from 2D shape for robotic grasping of objects [2.330913682033217]
We propose a method for 3D object reconstruction and 6D-pose estimation from 2D images.
Computing transformation parameters directly from the 2D images reduces the number of free parameters required during the registration process.
In robot experiments, successful grasping of objects demonstrates its usability in real-world environments.
arXiv Detail & Related papers (2022-03-02T11:58:35Z)
- FiG-NeRF: Figure-Ground Neural Radiance Fields for 3D Object Category Modelling [11.432178728985956]
We use Neural Radiance Fields (NeRF) to learn high-quality 3D object category models from collections of input images.
We show that this method can learn accurate 3D object category models using only photometric supervision and casually captured images.
arXiv Detail & Related papers (2021-04-17T01:38:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.