Enhanced 3DMM Attribute Control via Synthetic Dataset Creation Pipeline
- URL: http://arxiv.org/abs/2011.12833v2
- Date: Fri, 11 Dec 2020 04:47:46 GMT
- Title: Enhanced 3DMM Attribute Control via Synthetic Dataset Creation Pipeline
- Authors: Wonwoong Cho, Inyeop Lee, David Inouye
- Abstract summary: We develop a novel pipeline for generating paired 3D faces by harnessing the power of GANs.
We then propose an enhanced non-linear 3D conditional attribute controller that increases the precision and diversity of 3D attribute control.
- Score: 2.4309139330334846
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While facial attribute manipulation of 2D images via Generative Adversarial
Networks (GANs) has become common in computer vision and graphics due to its
many practical uses, research on 3D attribute manipulation is relatively
undeveloped. Existing 3D attribute manipulation methods are limited because the
same semantic changes are applied to every 3D face. The key challenge for
developing better 3D attribute control methods is the lack of paired training
data in which one attribute is changed while other attributes are held fixed --
e.g., a pair of 3D faces where one is male and the other is female but all
other attributes, such as race and expression, are the same. To overcome this
challenge, we design a novel pipeline for generating paired 3D faces by
harnessing the power of GANs. On top of this pipeline, we then propose an
enhanced non-linear 3D conditional attribute controller that increases the
precision and diversity of 3D attribute control compared to existing methods.
We demonstrate the validity of our dataset creation pipeline and the superior
performance of our conditional attribute controller via quantitative and
qualitative evaluations.
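The abstract describes two components: a GAN-based pipeline that produces paired 3D faces differing in a single attribute, and a non-linear conditional controller trained on those pairs. Below is a minimal PyTorch sketch of the controller idea only, assuming 199-dimensional 3DMM shape coefficients and a scalar attribute label; all names, dimensions, and the residual design are illustrative assumptions, not the authors' implementation.
```python
# Minimal sketch (not the authors' code) of a non-linear conditional
# attribute controller trained on hypothetical paired 3DMM coefficients.
import torch
import torch.nn as nn

class AttributeController(nn.Module):
    """Maps source 3DMM coefficients + target attribute to edited coefficients."""
    def __init__(self, coeff_dim=199, attr_dim=1, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(coeff_dim + attr_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, coeff_dim),
        )

    def forward(self, coeffs, attr):
        # Predict a residual so the source identity is largely preserved.
        return coeffs + self.net(torch.cat([coeffs, attr], dim=-1))

# Training on paired data (src, tgt): in the paper's pipeline, a GAN edits
# one attribute in image space, and fitting a 3DMM to both images yields
# coefficient pairs that differ only in that attribute. Placeholders here.
model = AttributeController()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
src, tgt = torch.randn(32, 199), torch.randn(32, 199)  # placeholder pairs
attr = torch.ones(32, 1)                               # e.g., target gender flag
loss = nn.functional.mse_loss(model(src, attr), tgt)
opt.zero_grad(); loss.backward(); opt.step()
```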
Related papers
- Generating Editable Head Avatars with 3D Gaussian GANs [57.51487984425395]
Traditional 3D-aware generative adversarial networks (GANs) achieve photorealistic and view-consistent 3D head synthesis.
We propose a novel approach that enhances the editability and animation control of 3D head avatars by incorporating 3D Gaussian Splatting (3DGS) as an explicit 3D representation.
Our approach delivers high-quality 3D-aware synthesis with state-of-the-art controllability.
arXiv Detail & Related papers (2024-12-26T10:10:03Z)
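As an illustration of why 3D Gaussian Splatting makes avatars directly editable, here is a sketch of the explicit per-Gaussian parameter set; the field names and sizes are generic 3DGS conventions, not this paper's API.
```python
# Illustrative sketch of the explicit parameters in 3D Gaussian Splatting.
import torch

num_gaussians = 100_000
gaussians = {
    "means":     torch.zeros(num_gaussians, 3),        # 3D centers
    "scales":    torch.ones(num_gaussians, 3) * 0.01,  # per-axis extent
    "rotations": torch.tensor([[1., 0., 0., 0.]]).repeat(num_gaussians, 1),  # quaternions
    "opacity":   torch.full((num_gaussians, 1), 0.1),
    "sh_coeffs": torch.zeros(num_gaussians, 16, 3),    # view-dependent color (SH)
}

# Because every parameter is an explicit tensor, an avatar edit can be a
# direct, local update, e.g. recoloring a (hypothetical) selected region:
region = torch.arange(0, 1000)           # indices of selected Gaussians
gaussians["sh_coeffs"][region, 0] = 0.5  # adjust the DC (base color) term
```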
- G3PT: Unleash the power of Autoregressive Modeling in 3D Generation via Cross-scale Querying Transformer [4.221298212125194]
We introduce G3PT, a scalable coarse-to-fine 3D generative model utilizing a cross-scale querying transformer.
The cross-scale querying transformer connects tokens globally across different levels of detail without requiring an ordered sequence.
Experiments demonstrate that G3PT achieves superior generation quality and generalization ability compared to previous 3D generation methods.
arXiv Detail & Related papers (2024-09-10T08:27:19Z)
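A conceptual PyTorch sketch of the cross-scale querying idea: learnable queries for a finer level cross-attend to tokens of a coarser level, so no ordered token sequence is needed between scales. Dimensions and names are assumptions, not the G3PT implementation.
```python
# Conceptual cross-scale querying via cross-attention (not the G3PT code).
import torch
import torch.nn as nn

class CrossScaleQuery(nn.Module):
    def __init__(self, dim=256, n_fine=512, n_heads=8):
        super().__init__()
        # Learnable queries for the finer level of detail.
        self.queries = nn.Parameter(torch.randn(n_fine, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, coarse_tokens):                 # (B, n_coarse, dim)
        q = self.queries.expand(coarse_tokens.size(0), -1, -1)
        fine_tokens, _ = self.attn(q, coarse_tokens, coarse_tokens)
        return fine_tokens                            # (B, n_fine, dim)

coarse = torch.randn(2, 64, 256)   # tokens at a coarse level of detail
fine = CrossScaleQuery()(coarse)   # queried finer-level tokens, (2, 512, 256)
```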
- Deep Geometric Moments Promote Shape Consistency in Text-to-3D Generation [27.43973967994717]
MT3D is a text-to-3D generative model that leverages a high-fidelity 3D object to overcome viewpoint bias.
By incorporating geometric details from a 3D asset, MT3D enables the creation of diverse and geometrically consistent objects.
arXiv Detail & Related papers (2024-08-12T06:25:44Z)
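For intuition, the classical geometric moments behind the paper's title can be computed directly from a point cloud; this NumPy sketch shows only the underlying quantity, while the paper itself learns deep moment features.
```python
# Classical raw 3D moments of a point cloud (background math only).
import numpy as np
from itertools import product

def raw_moments(points, max_order=2):
    """m_pqr = mean(x^p * y^q * z^r) over the point cloud."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    return {
        (p, q, r): np.mean(x**p * y**q * z**r)
        for p, q, r in product(range(max_order + 1), repeat=3)
        if p + q + r <= max_order
    }

pts = np.random.randn(4096, 3)  # placeholder surface samples of a shape
m = raw_moments(pts)
centroid = (m[(1, 0, 0)], m[(0, 1, 0)], m[(0, 0, 1)])  # first-order moments
```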
- Sculpt3D: Multi-View Consistent Text-to-3D Generation with Sparse 3D Prior [57.986512832738704]
We present a new framework, Sculpt3D, that equips the current pipeline with explicit injection of 3D priors from retrieved reference objects without re-training the 2D diffusion model.
Specifically, we demonstrate that high-quality and diverse 3D geometry can be guaranteed by keypoint supervision through a sparse ray sampling approach.
These two decoupled designs effectively harness 3D information from reference objects to generate 3D objects while preserving the generation quality of the 2D diffusion model.
arXiv Detail & Related papers (2024-03-14T07:39:59Z)
- AttriHuman-3D: Editable 3D Human Avatar Generation with Attribute Decomposition and Indexing [79.38471599977011]
We propose AttriHuman-3D, an editable 3D human generation model.
It generates all attributes in an overall attribute space with six feature planes, which are decomposed and manipulated with different attribute indexes.
Our model provides strong disentanglement between different attributes, allows fine-grained image editing, and generates high-quality 3D human avatars.
arXiv Detail & Related papers (2023-12-03T03:20:10Z)
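A minimal sketch of the attribute-decomposition-and-indexing idea: one learnable bank of feature planes per attribute, selected by index and composed, so editing one attribute leaves the others untouched. The attribute names, plane counts, and sizes are assumptions, not the paper's exact design.
```python
# Illustrative attribute decomposition and indexing (not the paper's code).
import torch
import torch.nn as nn

attributes = ["geometry", "upper_cloth", "lower_cloth", "hair", "face", "shoes"]

class AttributePlanes(nn.Module):
    def __init__(self, n_styles_per_attr=8, c=32, res=64):
        super().__init__()
        # One learnable plane per (attribute, style index).
        self.planes = nn.ParameterDict({
            a: nn.Parameter(torch.randn(n_styles_per_attr, c, res, res) * 0.02)
            for a in attributes
        })

    def forward(self, style_idx):  # dict: attribute name -> style index
        # Compose the selected plane of every attribute into one feature map.
        return sum(self.planes[a][style_idx[a]] for a in attributes)

planes = AttributePlanes()
base = {a: 0 for a in attributes}
feat = planes(base)                           # composed feature plane
edited = planes({**base, "hair": 3})          # change only the hair attribute
```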
- Guide3D: Create 3D Avatars from Text and Image Guidance [55.71306021041785]
Guide3D is a text-and-image-guided generative model for 3D avatar generation based on diffusion models.
Our framework produces topologically and structurally correct geometry and high-resolution textures.
arXiv Detail & Related papers (2023-08-18T17:55:47Z)
- UniG3D: A Unified 3D Object Generation Dataset [75.49544172927749]
UniG3D is a unified 3D object generation dataset constructed by employing a universal data transformation pipeline on ShapeNet datasets.
This pipeline converts each raw 3D model into a comprehensive multi-modal data representation.
The selection of data sources for our dataset is based on their scale and quality.
arXiv Detail & Related papers (2023-06-19T07:03:45Z)
- NeRF-GAN Distillation for Efficient 3D-Aware Generation with Convolutions [97.27105725738016]
The integration of Neural Radiance Fields (NeRFs) and generative models, such as Generative Adversarial Networks (GANs), has transformed 3D-aware generation from single-view images.
We propose a simple and effective method, based on re-using the well-disentangled latent space of a pre-trained NeRF-GAN in a pose-conditioned convolutional network to directly generate 3D-consistent images corresponding to the underlying 3D representations.
arXiv Detail & Related papers (2023-03-22T18:59:48Z)
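A hedged sketch of the distillation recipe: a pose-conditioned convolutional student learns to reproduce a pre-trained NeRF-GAN's renderings directly from its latents, skipping volume rendering at inference. The teacher below is a random stand-in, and the student architecture is illustrative only; a real setup would load an actual pre-trained NeRF-GAN.
```python
# Sketch of NeRF-GAN -> convolutional-student distillation (illustrative).
import torch
import torch.nn as nn

class ConvStudent(nn.Module):
    def __init__(self, z_dim=128, pose_dim=2):
        super().__init__()
        self.fc = nn.Linear(z_dim + pose_dim, 256 * 4 * 4)
        self.up = nn.Sequential(  # 4x4 -> 32x32 upsampling decoder
            nn.Upsample(scale_factor=2), nn.Conv2d(256, 128, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, z, pose):
        h = self.fc(torch.cat([z, pose], dim=-1)).view(-1, 256, 4, 4)
        return self.up(h)                          # (B, 3, 32, 32) image

def teacher_render(z, pose):
    # Placeholder for a frozen, pre-trained NeRF-GAN's volume rendering.
    return torch.rand(z.size(0), 3, 32, 32)

student = ConvStudent()
opt = torch.optim.Adam(student.parameters(), lr=2e-4)
z, pose = torch.randn(8, 128), torch.rand(8, 2)    # shared latent + camera pose
loss = nn.functional.l1_loss(student(z, pose), teacher_render(z, pose))
opt.zero_grad(); loss.backward(); opt.step()
```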
- Next3D: Generative Neural Texture Rasterization for 3D-Aware Head Avatars [36.4402388864691]
3D-aware generative adversarial networks (GANs) synthesize high-fidelity and multi-view-consistent facial images using only collections of single-view 2D imagery.
Recent efforts incorporate 3D Morphable Face Model (3DMM) to describe deformation in generative radiance fields either explicitly or implicitly.
We propose a novel 3D GAN framework for unsupervised learning of generative, high-quality and 3D-consistent facial avatars from unstructured 2D images.
arXiv Detail & Related papers (2022-11-21T06:40:46Z)
- Text and Image Guided 3D Avatar Generation and Manipulation [0.0]
We propose a novel 3D manipulation method that can manipulate both the shape and texture of the model using text- or image-based prompts such as 'a young face' or 'a surprised face'.
Our method requires only 5 minutes per manipulation, and we demonstrate the effectiveness of our approach with extensive results and comparisons.
arXiv Detail & Related papers (2022-02-12T14:37:29Z)
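A minimal sketch of prompt-driven 3DMM editing in the spirit of this last paper, assuming a CLIP similarity loss and a placeholder differentiable renderer; neither is confirmed here as the authors' exact method, and the coefficient dimensions are assumptions.
```python
# Sketch: optimize 3DMM coefficients so the rendered face matches a prompt.
# Requires: pip install git+https://github.com/openai/CLIP.git
import torch
import clip

device = "cpu"  # kept on CPU to avoid CLIP's fp16 path on CUDA
model, _ = clip.load("ViT-B/32", device=device)
with torch.no_grad():
    txt_feat = model.encode_text(clip.tokenize(["a surprised face"]).to(device))

def render_face(shape_c, tex_c):
    # Placeholder for a differentiable 3DMM renderer returning (1, 3, 224, 224).
    return (shape_c.mean() + tex_c.mean()).sigmoid().expand(1, 3, 224, 224)

shape_c = torch.zeros(199, requires_grad=True)     # 3DMM shape coefficients
tex_c = torch.zeros(199, requires_grad=True)       # 3DMM texture coefficients
opt = torch.optim.Adam([shape_c, tex_c], lr=0.05)

for _ in range(200):                               # short optimization loop
    img_feat = model.encode_image(render_face(shape_c, tex_c))
    loss = 1 - torch.cosine_similarity(img_feat, txt_feat).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```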
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.