Building 3D Generative Models from Minimal Data
- URL: http://arxiv.org/abs/2203.02554v1
- Date: Fri, 4 Mar 2022 20:10:50 GMT
- Title: Building 3D Generative Models from Minimal Data
- Authors: Skylar Sutherland, Bernhard Egger, Joshua Tenenbaum
- Abstract summary: We show that our approach can be used to perform face recognition using only a single 3D template (one scan total, not one per person).
We extend our model to a preliminary unsupervised learning framework that enables the learning of the distribution of 3D faces using one 3D template and a small number of 2D images.
- Score: 3.472931603805115
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a method for constructing generative models of 3D objects from a
single 3D mesh and improving them through unsupervised low-shot learning from
2D images. Our method produces a 3D morphable model that represents shape and
albedo in terms of Gaussian processes. Whereas previous approaches have
typically built 3D morphable models from multiple high-quality 3D scans through
principal component analysis, we build 3D morphable models from a single scan
or template. As we demonstrate in the face domain, these models can be used to
infer 3D reconstructions from 2D data (inverse graphics) or 3D data
(registration). Specifically, we show that our approach can be used to perform
face recognition using only a single 3D template (one scan total, not one per
person). We extend our model to a preliminary unsupervised learning framework
that enables the learning of the distribution of 3D faces using one 3D template
and a small number of 2D images. This approach could also provide a model for
the origins of face perception in human infants, who appear to start with an
innate face template and subsequently develop a flexible system for perceiving
the 3D structure of any novel face from experience with only 2D images of a
relatively small number of familiar faces.
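To make the "shape and albedo as Gaussian processes" idea concrete, here is a minimal sketch (our illustration, not the authors' released code) that draws smooth per-vertex displacement fields from a GP prior over a template's vertex positions. The function names, the squared-exponential kernel, and the hyperparameters are illustrative assumptions.

```python
# Minimal sketch of a GP morphable model built around one template mesh.
# Assumptions: RBF kernel over vertex positions, independent GPs per axis.
import numpy as np

def rbf_kernel(X, Y, sigma=0.1, ell=0.3):
    """Squared-exponential kernel over template vertex positions."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return sigma**2 * np.exp(-d2 / (2 * ell**2))

def sample_shape(template, n_samples=1, jitter=1e-6, seed=0):
    """Draw smooth per-vertex displacement fields from the GP prior
    and add them to the template, yielding novel plausible shapes."""
    rng = np.random.default_rng(seed)
    n = template.shape[0]
    K = rbf_kernel(template, template) + jitter * np.eye(n)
    L = np.linalg.cholesky(K)                 # K = L @ L.T
    eps = rng.standard_normal((n_samples, n, 3))
    displacements = np.einsum('ij,sjk->sik', L, eps)
    return template[None] + displacements

# Toy usage: random points stand in for a registered face template.
template = np.random.default_rng(1).uniform(-1, 1, size=(200, 3))
shapes = sample_shape(template, n_samples=4)
print(shapes.shape)  # (4, 200, 3)
```

Because the kernel couples nearby vertices, each sample deforms the template coherently rather than vertex-by-vertex, which is what lets a single scan act as a full shape prior.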
Related papers
- DIRECT-3D: Learning Direct Text-to-3D Generation on Massive Noisy 3D Data [50.164670363633704]
We present DIRECT-3D, a diffusion-based 3D generative model for creating high-quality 3D assets from text prompts.
Our model is directly trained on extensive noisy and unaligned 'in-the-wild' 3D assets.
We achieve state-of-the-art performance in both single-class generation and text-to-3D generation.
arXiv Detail & Related papers (2024-06-06T17:58:15Z)
- Sculpt3D: Multi-View Consistent Text-to-3D Generation with Sparse 3D Prior [57.986512832738704]
We present a new framework Sculpt3D that equips the current pipeline with explicit injection of 3D priors from retrieved reference objects without re-training the 2D diffusion model.
Specifically, we demonstrate that high-quality and diverse 3D geometry can be guaranteed by keypoint supervision through a sparse ray sampling approach.
These two decoupled designs effectively harness 3D information from reference objects to generate 3D objects while preserving the generation quality of the 2D diffusion model.
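As a rough illustration of what "keypoint supervision through sparse ray sampling" could look like in practice: cast rays only through the pixels where the reference keypoints project. The abstract gives no implementation details, so the projection model and every name below are our assumptions, not Sculpt3D's code.

```python
# Hedged sketch: rays through projected keypoints only, so 3D keypoint
# supervision touches a handful of rays per view instead of a full image.
import numpy as np

def project(points, K, R, t):
    """Pinhole projection of 3D keypoints into pixel coordinates."""
    cam = points @ R.T + t                     # world -> camera frame
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:3]

def keypoint_rays(keypoints_3d, K, R, t):
    """Ray origins/directions through each projected keypoint."""
    uv = project(keypoints_3d, K, R, t)
    pix = np.concatenate([uv, np.ones((len(uv), 1))], axis=1)
    dirs_cam = pix @ np.linalg.inv(K).T        # back-project pixels
    dirs = dirs_cam @ R                        # camera -> world directions
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    origin = -R.T @ t                          # camera center in world frame
    return np.broadcast_to(origin, dirs.shape), dirs
```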
arXiv Detail & Related papers (2024-03-14T07:39:59Z)
- Geometry aware 3D generation from in-the-wild images in ImageNet [18.157263188192434]
We propose a method for reconstructing 3D geometry from the diverse and unstructured ImageNet dataset without camera pose information.
We use an efficient triplane representation to learn 3D models from 2D images, modifying the architecture of a StyleGAN2-based generator backbone.
The trained generator can produce class-conditional 3D models as well as renderings from arbitrary viewpoints.
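For readers unfamiliar with the triplane representation, here is a minimal sketch in the style such generators typically follow (popularized by EG3D): a 3D point is projected onto three axis-aligned feature planes, each plane is bilinearly sampled, and the features are summed. Shapes and names are illustrative, not this paper's code.

```python
# Illustrative triplane feature lookup, not the paper's implementation.
import torch
import torch.nn.functional as F

def sample_triplane(planes, xyz):
    """planes: (3, C, H, W) feature maps for the xy, xz, yz planes.
    xyz: (N, 3) points in [-1, 1]^3. Returns (N, C) features."""
    coords = torch.stack([
        xyz[:, [0, 1]],   # xy plane
        xyz[:, [0, 2]],   # xz plane
        xyz[:, [1, 2]],   # yz plane
    ])                                         # (3, N, 2)
    grid = coords.unsqueeze(2)                 # (3, N, 1, 2) for grid_sample
    feats = F.grid_sample(planes, grid, align_corners=False)  # (3, C, N, 1)
    return feats.squeeze(-1).sum(0).T          # sum over planes -> (N, C)

planes = torch.randn(3, 32, 64, 64)            # e.g. decoded from a 2D backbone
xyz = torch.rand(1000, 3) * 2 - 1
print(sample_triplane(planes, xyz).shape)      # torch.Size([1000, 32])
```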
arXiv Detail & Related papers (2024-01-31T23:06:39Z)
- Articulated 3D Head Avatar Generation using Text-to-Image Diffusion Models [107.84324544272481]
The ability to generate diverse 3D articulated head avatars is vital to a plethora of applications, including augmented reality, cinematography, and education.
Recent work on text-guided 3D object generation has shown great promise in addressing these needs.
We show that our diffusion-based articulated head avatars outperform state-of-the-art approaches for this task.
arXiv Detail & Related papers (2023-07-10T19:15:32Z)
- AG3D: Learning to Generate 3D Avatars from 2D Image Collections [96.28021214088746]
We propose a new adversarial generative model of realistic 3D people from 2D images.
Our method captures the shape and deformation of the body and loose clothing by adopting a holistic 3D generator.
We experimentally find that our method outperforms previous 3D- and articulation-aware methods in terms of geometry and appearance.
arXiv Detail & Related papers (2023-05-03T17:56:24Z)
- CGOF++: Controllable 3D Face Synthesis with Conditional Generative Occupancy Fields [52.14985242487535]
We propose a new conditional 3D face synthesis framework, which enables 3D controllability over generated face images.
At its core is a conditional Generative Occupancy Field (cGOF++) that effectively enforces the shape of the generated face to conform to a given 3D Morphable Model (3DMM) mesh.
Experiments validate the effectiveness of the proposed method and show more precise 3D controllability than state-of-the-art 2D-based controllable face synthesis methods.
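A hedged sketch of the conformity idea: penalize a generated occupancy field wherever it disagrees with occupancy derived from the given 3DMM mesh. The loss below is our reading of the abstract, not the cGOF++ implementation; the sampling scheme and names are assumptions.

```python
# Illustrative mesh-conformity loss for a neural occupancy field.
import torch
import torch.nn.functional as F

def conformity_loss(occupancy_net, mesh_occupancy, n_points=4096):
    """occupancy_net: maps (N, 3) points to (N,) logits.
    mesh_occupancy: maps (N, 3) points to {0, 1} inside/outside labels."""
    pts = torch.rand(n_points, 3) * 2 - 1          # sample the unit cube
    target = mesh_occupancy(pts).float()           # 1 inside the 3DMM mesh
    logits = occupancy_net(pts)
    return F.binary_cross_entropy_with_logits(logits, target)

# Toy usage with stand-ins: a sphere as the "mesh", a small MLP as the field.
net = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.ReLU(),
                          torch.nn.Linear(64, 1))
inside_sphere = lambda p: p.norm(dim=-1) < 0.5
loss = conformity_loss(lambda p: net(p).squeeze(-1), inside_sphere)
```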
arXiv Detail & Related papers (2022-11-23T19:02:50Z)
- Next3D: Generative Neural Texture Rasterization for 3D-Aware Head Avatars [36.4402388864691]
3D-aware generative adversarial networks (GANs) synthesize high-fidelity and multi-view-consistent facial images using only collections of single-view 2D imagery.
Recent efforts incorporate the 3D Morphable Face Model (3DMM) to describe deformation in generative radiance fields, either explicitly or implicitly.
We propose a novel 3D GAN framework for unsupervised learning of generative, high-quality and 3D-consistent facial avatars from unstructured 2D images.
arXiv Detail & Related papers (2022-11-21T06:40:46Z)
- Disentangled3D: Learning a 3D Generative Model with Disentangled Geometry and Appearance from Monocular Images [94.49117671450531]
State-of-the-art 3D generative models are GANs that use neural 3D volumetric representations for synthesis.
In this paper, we design a 3D GAN which can learn a disentangled model of objects, just from monocular observations.
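A generic sketch of geometry/appearance disentanglement (our illustration, not the Disentangled3D architecture): density depends only on a geometry code, while color may depend on both codes, so swapping the appearance code restyles an object without changing its shape. All class and variable names are hypothetical.

```python
# Illustrative disentangled volumetric field with separate latent codes.
import torch
import torch.nn as nn

class DisentangledField(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.geo = nn.Sequential(nn.Linear(3 + dim, 128), nn.ReLU(),
                                 nn.Linear(128, 1))            # -> density
        self.app = nn.Sequential(nn.Linear(3 + 2 * dim, 128), nn.ReLU(),
                                 nn.Linear(128, 3))            # -> RGB

    def forward(self, xyz, z_geo, z_app):
        g = torch.cat([xyz, z_geo.expand(len(xyz), -1)], dim=-1)
        a = torch.cat([xyz, z_geo.expand(len(xyz), -1),
                       z_app.expand(len(xyz), -1)], dim=-1)
        return self.geo(g), torch.sigmoid(self.app(a))

field = DisentangledField()
sigma, rgb = field(torch.rand(512, 3), torch.randn(1, 64), torch.randn(1, 64))
```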
arXiv Detail & Related papers (2022-03-29T22:03:18Z)
- Building 3D Morphable Models from a Single Scan [3.472931603805115]
We propose a method for constructing generative models of 3D objects from a single 3D mesh.
Our method produces a 3D morphable model that represents shape and albedo in terms of Gaussian processes.
We show that our approach can be used to perform face recognition using only a single 3D scan.
arXiv Detail & Related papers (2020-11-24T23:08:14Z)
- Leveraging 2D Data to Learn Textured 3D Mesh Generation [33.32377849866736]
We present the first generative model of textured 3D meshes.
We train our model to explain a distribution of images by modelling each image as a 3D foreground object.
It learns to generate meshes that, when rendered, produce images similar to those in its training set.
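The render-and-compare training signal described here can be sketched generically. `render` stands in for any differentiable renderer, and `generator` for the mesh-producing network; both are assumptions, not the paper's code.

```python
# Sketch of a photometric reconstruction objective for textured mesh
# generation: renders of generated meshes should match the training photos.
import torch

def reconstruction_loss(generator, render, images, latents):
    """images: (B, 3, H, W) training photos; latents: (B, D) codes."""
    meshes = generator(latents)                    # textured 3D foreground objects
    renders = torch.stack([render(m) for m in meshes])
    return torch.mean((renders - images) ** 2)     # photometric reconstruction
```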
arXiv Detail & Related papers (2020-04-08T18:00:37Z)
- FaceScape: a Large-scale High Quality 3D Face Dataset and Detailed Riggable 3D Face Prediction [39.95272819738226]
We present a novel algorithm that predicts elaborate, riggable 3D face models from a single input image.
The FaceScape dataset provides 18,760 textured 3D faces, captured from 938 subjects, each with 20 specific expressions.
arXiv Detail & Related papers (2020-03-31T07:11:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.