Generative Occupancy Fields for 3D Surface-Aware Image Synthesis
- URL: http://arxiv.org/abs/2111.00969v1
- Date: Mon, 1 Nov 2021 14:20:43 GMT
- Title: Generative Occupancy Fields for 3D Surface-Aware Image Synthesis
- Authors: Xudong Xu, Xingang Pan, Dahua Lin, Bo Dai
- Abstract summary: Generative Occupancy Fields (GOF) is a novel model based on generative radiance fields.
GOF can synthesize high-quality images with 3D consistency and simultaneously learn compact and smooth object surfaces.
- Score: 123.11969582055382
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The advent of generative radiance fields has significantly promoted the
development of 3D-aware image synthesis. The cumulative rendering process in
radiance fields makes training these generative models much easier since
gradients are distributed over the entire volume, but leads to diffused object
surfaces. Meanwhile, compared to radiance fields, occupancy representations
can inherently ensure deterministic surfaces. However, if we
directly apply occupancy representations to generative models, during training
they will receive only sparse gradients located on object surfaces and
eventually suffer from convergence problems. In this paper, we propose
Generative Occupancy Fields (GOF), a novel model based on generative radiance
fields that can learn compact object surfaces without impeding its training
convergence. The key insight of GOF is a dedicated transition from the
cumulative rendering in radiance fields to rendering with only the surface
points as the learned surface becomes increasingly accurate. In this way, GOF
combines the merits of both representations in a unified framework. In practice,
the training-time transition from radiance fields to occupancy representations
is achieved in GOF by gradually shrinking the
sampling region in its rendering process from the entire volume to a minimal
neighboring region around the surface. Through comprehensive experiments on
multiple datasets, we demonstrate that GOF can synthesize high-quality images
with 3D consistency and simultaneously learn compact and smooth object
surfaces. Code, models, and demo videos are available at
https://sheldontsui.github.io/projects/GOF
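To make the annealing concrete: the sampling interval along each ray is centered on the current surface estimate and its width decays over training, so rendering starts as cumulative volume rendering over the whole ray and ends up evaluating only a thin shell around the surface. The following is a minimal NumPy sketch of that schedule; `occupancy_fn`, `sample_ray`, `shrink`, and the 0.5-crossing surface locator are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of a shrinking per-ray sampling region (illustrative only).
import numpy as np

def render_weights(alphas):
    """Cumulative (volume) rendering: w_i = alpha_i * prod_{j<i}(1 - alpha_j).
    Gradients reach every sample, which eases training but blurs surfaces."""
    transmittance = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    return alphas * transmittance

def sample_ray(occupancy_fn, t_near, t_far, shrink, n_samples=64):
    """Render one ray with the sampling region shrunk around the surface.

    `shrink` anneals from 1.0 (sample the entire volume, as in generative
    radiance fields) toward 0.0 (sample only a minimal neighborhood of the
    surface, as in occupancy representations).
    """
    # Coarse pass over the full interval to locate the surface, taken here
    # as the first sample where cumulative opacity crosses 0.5 (assumption).
    ts = np.linspace(t_near, t_far, n_samples)
    opacity = 1.0 - np.cumprod(1.0 - occupancy_fn(ts))
    t_surf = ts[min(int(np.searchsorted(opacity, 0.5)), n_samples - 1)]

    # Fine pass over an interval of width shrink * (t_far - t_near),
    # centered on the estimated surface point.
    half = 0.5 * shrink * (t_far - t_near)
    lo, hi = max(t_near, t_surf - half), min(t_far, t_surf + half)
    ts = np.linspace(lo, hi, n_samples)
    return ts, render_weights(occupancy_fn(ts))
```

In a training loop, `shrink` would be decayed from 1.0 toward a small positive value as the learned surface sharpens, so gradients stay dense early on and concentrate near the surface later.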
Related papers
- Mesh2NeRF: Direct Mesh Supervision for Neural Radiance Field Representation and Generation [51.346733271166926]
Mesh2NeRF is an approach to derive ground-truth radiance fields from textured meshes for 3D generation tasks.
We validate the effectiveness of Mesh2NeRF across various tasks.
arXiv Detail & Related papers (2024-03-28T11:22:53Z) - Feature 3DGS: Supercharging 3D Gaussian Splatting to Enable Distilled Feature Fields [54.482261428543985]
Methods that use neural radiance fields are versatile for traditional tasks such as novel view synthesis.
3D Gaussian splatting has shown state-of-the-art performance on real-time radiance field rendering.
We propose architectural and training changes to efficiently avert this problem.
arXiv Detail & Related papers (2023-12-06T00:46:30Z) - HybridNeRF: Efficient Neural Rendering via Adaptive Volumetric Surfaces [71.1071688018433]
Neural radiance fields provide state-of-the-art view synthesis quality but tend to be slow to render.
We propose a method, HybridNeRF, that leverages the strengths of both representations by rendering most objects as surfaces.
We improve error rates by 15-30% while achieving real-time framerates (at least 36 FPS) for virtual-reality resolutions (2Kx2K).
arXiv Detail & Related papers (2023-12-05T22:04:49Z) - NeRFMeshing: Distilling Neural Radiance Fields into
Geometrically-Accurate 3D Meshes [56.31855837632735]
We propose a compact and flexible architecture that enables easy 3D surface reconstruction from any NeRF-driven approach.
Our final 3D mesh is physically accurate and can be rendered in real time on an array of devices.
arXiv Detail & Related papers (2023-03-16T16:06:03Z) - Generative Deformable Radiance Fields for Disentangled Image Synthesis
of Topology-Varying Objects [52.46838926521572]
3D-aware generative models have demonstrated superb performance in generating 3D neural radiance fields (NeRF) from a collection of monocular 2D images.
We propose a generative model for synthesizing radiance fields of topology-varying objects with disentangled shape and appearance variations.
arXiv Detail & Related papers (2022-09-09T08:44:06Z) - GRAM: Generative Radiance Manifolds for 3D-Aware Image Generation [25.20217335614512]
3D-aware image generative modeling aims to generate 3D-consistent images with explicitly controllable camera poses.
Recent works have shown promising results by training neural radiance field (NeRF) generators on unstructured 2D images.
arXiv Detail & Related papers (2021-12-16T13:25:49Z) - GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis [43.4859484191223]
We propose a generative model for radiance fields, which have recently proven successful for novel view synthesis of a single scene.
By introducing a multi-scale patch-based discriminator, we demonstrate synthesis of high-resolution images while training our model from unposed 2D images alone.
arXiv Detail & Related papers (2020-07-05T20:37:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.