HyperNeRFGAN: Hypernetwork approach to 3D NeRF GAN
- URL: http://arxiv.org/abs/2301.11631v1
- Date: Fri, 27 Jan 2023 10:21:18 GMT
- Title: HyperNeRFGAN: Hypernetwork approach to 3D NeRF GAN
- Authors: Adam Kania, Artur Kasymov, Maciej Zięba, Przemysław Spurek
- Abstract summary: We propose a generative model called HyperNeRFGAN, which uses the hypernetwork paradigm to produce 3D objects represented by NeRFs.
Our architecture produces 2D images, but the 3D-aware NeRF representation forces the model to produce correct 3D objects.
- Score: 3.479254848034425
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, generative models for 3D objects have been gaining
popularity in VR and augmented reality applications. Training such models using
standard 3D representations, like voxels or point clouds, is challenging and
requires complex tools for proper color rendering. To overcome this limitation,
Neural Radiance Fields (NeRFs) offer state-of-the-art quality in synthesizing
novel views of complex 3D scenes from a small subset of 2D images.
In this paper, we propose a generative model called HyperNeRFGAN, which uses
the hypernetwork paradigm to produce 3D objects represented by NeRFs. Our GAN
architecture leverages a hypernetwork to transform Gaussian noise into the
weights of a NeRF model. The model is then used to render 2D novel views, and
a classical 2D discriminator is used to train the entire GAN-based structure.
Although the architecture produces 2D images, the 3D-aware NeRF representation
forces the model to produce correct 3D objects. The advantage of the model over
existing approaches is that it produces a dedicated NeRF representation for
each object without sharing global parameters of the rendering component. We
show the superiority of our approach over reference baselines on three
challenging datasets from various domains.
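The pipeline the abstract describes (noise → hypernetwork → per-object NeRF weights → rendered views → 2D discriminator) can be made concrete with a short sketch. The PyTorch code below is an illustrative assumption, not the authors' released architecture: the module names (`HyperNetwork`, `nerf_forward`), layer counts, and sizes are invented for brevity, and volume rendering plus the discriminator are only indicated in comments.

```python
import torch
import torch.nn as nn

# Illustrative sizes; the paper's actual architecture differs.
Z_DIM, HIDDEN, NERF_HIDDEN = 128, 256, 64
IN_DIM, OUT_DIM = 3, 4  # (x, y, z) -> (RGB, density)

# Shapes of the target NeRF MLP's parameters (two layers here for brevity).
NERF_SHAPES = [
    ("w1", (NERF_HIDDEN, IN_DIM)), ("b1", (NERF_HIDDEN,)),
    ("w2", (OUT_DIM, NERF_HIDDEN)), ("b2", (OUT_DIM,)),
]
TOTAL = sum(torch.Size(s).numel() for _, s in NERF_SHAPES)

class HyperNetwork(nn.Module):
    """Maps a Gaussian noise vector to a flat vector of NeRF weights."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(Z_DIM, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, TOTAL),
        )

    def forward(self, z):
        flat = self.net(z)  # (batch, TOTAL)
        params, i = {}, 0
        for name, shape in NERF_SHAPES:
            n = torch.Size(shape).numel()
            params[name] = flat[:, i:i + n].view(-1, *shape)
            i += n
        return params

def nerf_forward(params, pts):
    """Runs the generated per-object NeRF MLP on 3D points (batch, n, 3)."""
    h = torch.relu(torch.einsum("bni,bhi->bnh", pts, params["w1"])
                   + params["b1"][:, None])
    return torch.einsum("bnh,boh->bno", h, params["w2"]) + params["b2"][:, None]

z = torch.randn(2, Z_DIM)                            # Gaussian noise, two objects
params = HyperNetwork()(z)                           # dedicated NeRF weights each
out = nerf_forward(params, torch.rand(2, 1024, 3))   # (2, 1024, 4): RGB + sigma
# Volume-rendering `out` into 2D images and scoring the renders with an
# ordinary 2D discriminator would complete the GAN loop described above.
```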
Related papers
- Magnituder Layers for Implicit Neural Representations in 3D [23.135779936528333]
We introduce a novel neural network layer called the "magnituder".
By integrating magnituders into standard feed-forward layer stacks, we achieve improved inference speed and adaptability.
Our approach enables a zero-shot performance boost in trained implicit neural representation models.
arXiv Detail & Related papers (2024-10-13T08:06:41Z)
- SG-NeRF: Neural Surface Reconstruction with Scene Graph Optimization [16.460851701725392]
We present a novel approach that optimizes radiance fields with scene graphs to mitigate the influence of outlier poses.
Our method incorporates an adaptive inlier-outlier confidence estimation scheme based on scene graphs.
We also introduce an effective intersection-over-union (IoU) loss to optimize the camera pose and surface geometry (a generic sketch of such a loss follows this entry).
arXiv Detail & Related papers (2024-07-17T15:50:17Z)
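SG-NeRF's exact IoU formulation is not given in this summary; as a point of reference, a differentiable IoU loss over soft masks, of the kind such pose-and-geometry objectives typically build on, can be sketched as follows. Everything here (`soft_iou_loss`, the mask shapes) is a generic assumption rather than the paper's implementation.

```python
import torch

def soft_iou_loss(pred, target, eps=1e-6):
    """Differentiable IoU loss between soft masks in [0, 1].

    pred, target: (batch, H, W) tensors; returns 1 - mean IoU, so that
    perfectly overlapping masks give a loss of 0.
    """
    inter = (pred * target).sum(dim=(1, 2))
    union = (pred + target - pred * target).sum(dim=(1, 2))
    return 1.0 - (inter / (union + eps)).mean()

pred = torch.rand(4, 64, 64, requires_grad=True)  # e.g. rendered object masks
target = (torch.rand(4, 64, 64) > 0.5).float()    # e.g. observed masks
soft_iou_loss(pred, target).backward()            # gradients flow to pose/geometry
```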
- HyperPlanes: Hypernetwork Approach to Rapid NeRF Adaptation [4.53411151619456]
We propose a few-shot learning approach based on the hypernetwork paradigm that does not require gradient optimization during inference.
We have developed an efficient method for generating a high-quality 3D object representation from a small number of images in a single step.
arXiv Detail & Related papers (2024-02-02T16:10:29Z)
- Single-Stage Diffusion NeRF: A Unified Approach to 3D Generation and Reconstruction [77.69363640021503]
3D-aware image synthesis encompasses a variety of tasks, such as scene generation and novel view synthesis from images.
We present SSDNeRF, a unified approach that employs an expressive diffusion model to learn a generalizable prior of neural radiance fields (NeRF) from multi-view images of diverse objects.
arXiv Detail & Related papers (2023-04-13T17:59:01Z)
- NeRF-GAN Distillation for Efficient 3D-Aware Generation with Convolutions [97.27105725738016]
The integration of Neural Radiance Fields (NeRFs) and generative models, such as Generative Adversarial Networks (GANs), has transformed 3D-aware generation from single-view images.
We propose a simple and effective method based on re-using the well-disentangled latent space of a pre-trained NeRF-GAN in a pose-conditioned convolutional network to directly generate 3D-consistent images corresponding to the underlying 3D representations (a hypothetical sketch follows this entry).
arXiv Detail & Related papers (2023-03-22T18:59:48Z)
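To make the idea of amortizing a pre-trained NeRF-GAN into a pose-conditioned convolutional network concrete, here is a hypothetical PyTorch sketch. The class name, layer sizes, pose encoding, and the distillation target noted in the final comment are all assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn

class PoseConditionedGenerator(nn.Module):
    """Convolutional decoder mapping a (latent, pose) pair to an image.

    Stand-in for the paper's network: it would be trained so its output
    matches the (slow) volume-rendered image of a frozen pre-trained
    NeRF-GAN at the same latent and camera pose, amortizing rendering
    into fast convolutions.
    """
    def __init__(self, w_dim=512, pose_dim=12):
        super().__init__()
        self.fc = nn.Linear(w_dim + pose_dim, 256 * 4 * 4)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(),  # 8x8
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),   # 16x16
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),     # 32x32 RGB
        )

    def forward(self, w, pose):
        x = self.fc(torch.cat([w, pose.flatten(1)], dim=1))
        return self.net(x.view(-1, 256, 4, 4))

g = PoseConditionedGenerator()
img = g(torch.randn(2, 512), torch.randn(2, 3, 4))  # latent + camera extrinsics
# Hypothetical distillation loss: ||img - nerf_gan_render(w, pose)||_1
```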
- DehazeNeRF: Multiple Image Haze Removal and 3D Shape Reconstruction using Neural Radiance Fields [56.30120727729177]
We introduce DehazeNeRF as a framework that robustly operates in hazy conditions.
We demonstrate successful multi-view haze removal, novel view synthesis, and 3D shape reconstruction where existing approaches fail.
arXiv Detail & Related papers (2023-03-20T18:03:32Z)
- NerfDiff: Single-image View Synthesis with NeRF-guided Distillation from 3D-aware Diffusion [107.67277084886929]
Novel view synthesis from a single image requires inferring occluded regions of objects and scenes while simultaneously maintaining semantic and physical consistency with the input.
We propose NerfDiff, which addresses this issue by distilling the knowledge of a 3D-aware conditional diffusion model (CDM) into NeRF through synthesizing and refining a set of virtual views at test time.
We further propose a novel NeRF-guided distillation algorithm that simultaneously generates 3D-consistent virtual views from the CDM samples and finetunes the NeRF based on the improved virtual views (a toy sketch of this loop follows this entry).
arXiv Detail & Related papers (2023-02-20T17:12:00Z)
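The NeRF-guided distillation loop can be illustrated with a toy sketch. The stand-ins below (`render_nerf`, `cdm_refine`, the pose pool, all sizes) are deliberately trivial placeholders assumed for illustration; only the loop structure, render → refine with a frozen CDM → finetune the NeRF against the refined view, reflects the summary above.

```python
import torch

# Hypothetical stand-ins: a NeRF with parameters `theta`, a frozen 3D-aware
# conditional diffusion model `cdm_refine`, and a pool of virtual camera poses.
def render_nerf(theta, pose):            # toy "render": any differentiable fn
    return torch.tanh(theta.sum() + pose)

def cdm_refine(blurry_view):             # frozen CDM "refines" a rendered view
    return (blurry_view + 0.1 * blurry_view.sign()).detach()

theta = torch.randn(64, requires_grad=True)       # NeRF parameters
poses = [torch.randn(3, 32, 32) for _ in range(8)]
opt = torch.optim.Adam([theta], lr=1e-3)

for step in range(100):                  # NeRF-guided distillation loop
    pose = poses[step % len(poses)]
    rendered = render_nerf(theta, pose)           # 1) render a virtual view
    refined = cdm_refine(rendered)                # 2) CDM improves the view
    loss = torch.nn.functional.mse_loss(rendered, refined)  # 3) finetune NeRF
    opt.zero_grad(); loss.backward(); opt.step()
```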
- Deep Generative Models on 3D Representations: A Survey [81.73385191402419]
Generative models aim to learn the distribution of observed data by generating new instances.
Recently, researchers have started to shift focus from 2D to 3D space, where representing 3D data poses significantly greater challenges.
arXiv Detail & Related papers (2022-10-27T17:59:50Z)
- CLONeR: Camera-Lidar Fusion for Occupancy Grid-aided Neural Representations [77.90883737693325]
This paper proposes CLONeR, which significantly improves upon NeRF by allowing it to model large outdoor driving scenes observed from sparse input sensor views.
This is achieved by decoupling occupancy and color learning within the NeRF framework into separate Multi-Layer Perceptrons (MLPs) trained using LiDAR and camera data, respectively.
In addition, this paper proposes a novel method to build differentiable 3D Occupancy Grid Maps (OGMs) alongside the NeRF model, and leverages this occupancy grid for improved sampling of points along a ray for rendering in metric space (a minimal sketch of this decoupling follows this entry).
arXiv Detail & Related papers (2022-09-02T17:44:50Z)
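CLONeR's decoupling of occupancy and color into separate MLPs is easy to sketch. The code below is a minimal illustration under assumed sizes and names (`occupancy_mlp`, `color_mlp`); the real system's supervision from LiDAR and camera data, and its differentiable occupancy grid, are only indicated in comments.

```python
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=64):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

# Decoupled heads: occupancy from geometry only, color from point + view dir.
occupancy_mlp = mlp(3, 1)   # (x, y, z)           -> occupancy logit
color_mlp = mlp(6, 3)       # (x, y, z, view dir) -> RGB

pts = torch.rand(1024, 3)             # samples along camera/LiDAR rays
dirs = torch.rand(1024, 3)
sigma = occupancy_mlp(pts)            # would be trained against LiDAR returns
rgb = torch.sigmoid(color_mlp(torch.cat([pts, dirs], dim=1)))  # camera loss
# A differentiable occupancy grid built from `occupancy_mlp` would then guide
# where to place ray samples, as the paper describes.
```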