3DAvatarGAN: Bridging Domains for Personalized Editable Avatars
- URL: http://arxiv.org/abs/2301.02700v2
- Date: Sun, 26 Mar 2023 11:26:54 GMT
- Title: 3DAvatarGAN: Bridging Domains for Personalized Editable Avatars
- Authors: Rameen Abdal, Hsin-Ying Lee, Peihao Zhu, Menglei Chai, Aliaksandr
Siarohin, Peter Wonka, Sergey Tulyakov
- Abstract summary: 3D-GANs synthesize geometry and texture by training on large-scale datasets with a consistent structure.
We propose an adaptation framework, where the source domain is a pre-trained 3D-GAN, while the target domain is a 2D-GAN trained on artistic datasets.
We show a deformation-based technique for modeling exaggerated geometry of artistic domains, enabling -- as a byproduct -- personalized geometric editing.
- Score: 75.31960120109106
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modern 3D-GANs synthesize geometry and texture by training on large-scale
datasets with a consistent structure. Training such models on stylized,
artistic data, with often unknown, highly variable geometry and camera
information, has not yet been shown possible. Can we train a 3D-GAN on such
artistic data while maintaining multi-view consistency and texture quality? To
this end, we propose an adaptation framework, where the source domain is a
pre-trained 3D-GAN, while the target domain is a 2D-GAN trained on artistic
datasets. We then distill the knowledge from a 2D generator to the source 3D
generator. To do that, we first propose an optimization-based method to align
the distributions of camera parameters across domains. Second, we propose
regularizations necessary to learn high-quality texture, while avoiding
degenerate geometric solutions, such as flat shapes. Third, we show a
deformation-based technique for modeling exaggerated geometry of artistic
domains, enabling -- as a byproduct -- personalized geometric editing. Finally,
we propose a novel inversion method for 3D-GANs linking the latent spaces of
the source and the target domains. Our contributions -- for the first time --
allow for the generation, editing, and animation of personalized artistic 3D
avatars on artistic datasets.
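To make the first step concrete, below is a minimal, runnable PyTorch sketch of what an optimization-based camera-parameter alignment could look like: the learnable mean and scale of a camera-pose distribution are tuned so that renders from the frozen 3D generator match the poses estimated on samples from the artistic 2D generator. All modules here (ToyGenerator, ToyPoseEstimator) are illustrative stand-ins, not the paper's released code.

```python
import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    """Stand-in for a generator: maps latents (and optionally a camera) to images."""
    def __init__(self, cam_conditioned: bool):
        super().__init__()
        in_dim = 512 + (3 if cam_conditioned else 0)
        self.net = nn.Linear(in_dim, 3 * 32 * 32)

    def forward(self, z, cam=None):
        x = torch.cat([z, cam], dim=1) if cam is not None else z
        return self.net(x).view(-1, 3, 32, 32)

class ToyPoseEstimator(nn.Module):
    """Stand-in for an off-the-shelf head-pose estimator (pitch/yaw/roll)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(3 * 32 * 32, 3)

    def forward(self, img):
        return self.net(img.flatten(1))

g3d, g2d, pose = ToyGenerator(True), ToyGenerator(False), ToyPoseEstimator()
for m in (g3d, g2d, pose):
    m.requires_grad_(False)  # only the camera distribution is optimized

# Learnable parameters of the camera distribution for the artistic domain.
cam_mu = torch.zeros(3, requires_grad=True)
cam_log_sigma = torch.zeros(3, requires_grad=True)
opt = torch.optim.Adam([cam_mu, cam_log_sigma], lr=1e-3)

for step in range(100):
    z = torch.randn(8, 512)
    # Reparameterized sampling keeps cameras differentiable w.r.t. mu/sigma.
    cam = cam_mu + cam_log_sigma.exp() * torch.randn(8, 3)
    loss = (pose(g3d(z, cam)) - pose(g2d(z))).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The reparameterization (mu + sigma * noise) keeps sampled cameras differentiable with respect to the distribution parameters, which is what lets a plain gradient optimizer align the two domains.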
Related papers
- Enhancing Single Image to 3D Generation using Gaussian Splatting and Hybrid Diffusion Priors [17.544733016978928]
3D object generation from a single image involves estimating the full 3D geometry and texture of unseen views from an unposed RGB image captured in the wild.
Recent advancements in 3D object generation have introduced techniques that reconstruct an object's 3D shape and texture.
We propose bridging the gap between 2D and 3D diffusion models to address the limitations of such techniques.
arXiv Detail & Related papers (2024-10-12T10:14:11Z) - En3D: An Enhanced Generative Model for Sculpting 3D Humans from 2D
Synthetic Data [36.51674664590734]
We present En3D, an enhanced generative scheme for sculpting high-quality 3D human avatars.
Unlike previous works that rely on scarce 3D datasets or limited 2D collections with imbalanced viewing angles and pose priors, our approach aims to develop a zero-shot 3D generative scheme capable of producing realistic 3D humans.
arXiv Detail & Related papers (2024-01-02T12:06:31Z) - DeformToon3D: Deformable 3D Toonification from Neural Radiance Fields [96.0858117473902]
3D toonification involves transferring the style of an artistic domain onto a target 3D face with stylized geometry and texture.
We propose DeformToon3D, an effective toonification framework tailored for hierarchical 3D GAN.
Our approach decomposes 3D toonification into subproblems of geometry and texture stylization to better preserve the original latent space.
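In the spirit of that decomposition, here is a minimal, runnable PyTorch sketch: geometry stylization is a small residual deformation field applied to sample points before they reach a frozen base radiance field, and texture stylization is a separate color mapping on the output, so the base model and its latent space stay untouched. All names (DeformField, ColorStyler, frozen_base_nerf) are hypothetical placeholders rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class DeformField(nn.Module):
    """Geometry stylization: small residual offsets on 3D sample points."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 3))

    def forward(self, x):
        return x + 0.1 * torch.tanh(self.net(x))

class ColorStyler(nn.Module):
    """Texture stylization: restyles colors, leaving geometry untouched."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(3, 3)

    def forward(self, rgb):
        return torch.sigmoid(self.net(rgb))

def frozen_base_nerf(pts):
    """Stand-in for the pre-trained radiance field: points -> (rgb, density)."""
    return torch.sigmoid(pts), torch.relu(pts[..., :1]).squeeze(-1)

deform, styler = DeformField(), ColorStyler()
pts = torch.rand(1024, 3)
rgb, density = frozen_base_nerf(deform(pts))  # query base field at warped points
rgb = styler(rgb)                             # restyle color separately
```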
arXiv Detail & Related papers (2023-09-08T16:17:45Z) - TADA! Text to Animatable Digital Avatars [57.52707683788961]
TADA takes textual descriptions and produces expressive 3D avatars with high-quality geometry and lifelike textures.
We derive an optimizable high-resolution body model from SMPL-X with 3D displacements and a texture map.
We render normals and RGB images of the generated character and exploit their latent embeddings in the SDS training process.
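For readers unfamiliar with SDS (Score Distillation Sampling), the sketch below shows the core update in toy, runnable PyTorch: noise a differentiable render, let a frozen denoiser predict the noise, and inject the weighted residual directly as the gradient of the avatar parameters, never backpropagating through the diffusion model. The ToyDenoiser, the schedule, and the stand-in for the render step are assumptions for illustration; TADA uses a pre-trained text-conditioned diffusion model over renders of the SMPL-X-based avatar.

```python
import torch
import torch.nn as nn

class ToyDenoiser(nn.Module):
    """Stand-in for a frozen diffusion model's noise predictor."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3, 3, kernel_size=3, padding=1)

    def forward(self, x_noisy, t):
        return self.net(x_noisy)  # a real model would also condition on t and text

denoiser = ToyDenoiser().requires_grad_(False)

# "Avatar" parameters; in TADA these would be SMPL-X displacements and a
# texture map, rendered differentiably into an image.
theta = torch.rand(1, 3, 32, 32, requires_grad=True)
opt = torch.optim.Adam([theta], lr=1e-2)
alphas = torch.linspace(0.999, 0.01, 1000)  # toy noise schedule

for step in range(100):
    x = theta  # stands in for render(theta)
    t = torch.randint(0, 1000, (1,))
    a = alphas[t].view(-1, 1, 1, 1)
    eps = torch.randn_like(x)
    x_noisy = a.sqrt() * x + (1 - a).sqrt() * eps
    eps_hat = denoiser(x_noisy, t)
    # SDS: treat the weighted noise residual as the gradient of the render,
    # skipping backprop through the denoiser itself.
    grad = ((1 - a) * (eps_hat - eps)).detach()
    opt.zero_grad()
    x.backward(gradient=grad)
    opt.step()
```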
arXiv Detail & Related papers (2023-08-21T17:59:10Z) - HyperStyle3D: Text-Guided 3D Portrait Stylization via Hypernetworks [101.36230756743106]
This paper is inspired by the success of 3D-aware GANs that bridge 2D and 3D domains with 3D fields as the intermediate representation for rendering 2D images.
We propose a novel method, dubbed HyperStyle3D, based on 3D-aware GANs for 3D portrait stylization.
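The core hypernetwork idea can be sketched in a few lines: a small network maps a style (e.g., text) embedding to weight offsets for a frozen generator layer, so a single hypernetwork serves many styles without retraining the generator. The HyperLayer below, its dimensions, and the 0.01 offset scale are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class HyperLayer(nn.Module):
    """A frozen linear layer whose weights are offset per style embedding."""
    def __init__(self, in_dim, out_dim, style_dim):
        super().__init__()
        self.base = nn.Linear(in_dim, out_dim)
        self.base.requires_grad_(False)                       # generator stays frozen
        self.hyper = nn.Linear(style_dim, out_dim * in_dim)   # predicts offsets

    def forward(self, x, style):
        d_w = self.hyper(style).view(-1, self.base.out_features,
                                     self.base.in_features)
        w = self.base.weight.unsqueeze(0) + 0.01 * d_w  # small per-style offsets
        return torch.einsum('boi,bi->bo', w, x) + self.base.bias

layer = HyperLayer(in_dim=64, out_dim=64, style_dim=16)
x, style = torch.randn(4, 64), torch.randn(4, 16)
print(layer(x, style).shape)  # torch.Size([4, 64])
```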
arXiv Detail & Related papers (2023-04-19T07:22:05Z) - XDGAN: Multi-Modal 3D Shape Generation in 2D Space [60.46777591995821]
We propose a novel method to convert 3D shapes into compact 1-channel geometry images and leverage StyleGAN3 and image-to-image translation networks to generate 3D objects in 2D space.
The generated geometry images are quick to convert to 3D meshes, enabling real-time 3D object synthesis, visualization and interactive editing.
We show both quantitatively and qualitatively that our method is highly effective at various tasks such as 3D shape generation, single view reconstruction and shape manipulation, while being significantly faster and more flexible compared to recent 3D generative models.
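To illustrate why geometry images make 3D synthesis fast, the sketch below converts a grid of per-pixel 3D coordinates back into a triangle mesh by connecting neighboring cells. This uses the classic 3-channel (xyz) formulation for simplicity, whereas XDGAN's representation is a more compact single-channel variant; the helper is illustrative rather than the paper's code.

```python
import numpy as np

def geometry_image_to_mesh(gim: np.ndarray):
    """gim: (H, W, 3) grid of 3D points. Returns (vertices, faces)."""
    h, w, _ = gim.shape
    vertices = gim.reshape(-1, 3)
    idx = np.arange(h * w).reshape(h, w)
    # Connect each grid cell's four corners into two triangles.
    a, b = idx[:-1, :-1].ravel(), idx[:-1, 1:].ravel()
    c, d = idx[1:, :-1].ravel(), idx[1:, 1:].ravel()
    faces = np.concatenate([np.stack([a, b, c], 1), np.stack([b, d, c], 1)])
    return vertices, faces

verts, faces = geometry_image_to_mesh(np.random.rand(64, 64, 3))
print(verts.shape, faces.shape)  # (4096, 3) (7938, 3)
```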
arXiv Detail & Related papers (2022-10-06T15:54:01Z) - 3DStyleNet: Creating 3D Shapes with Geometric and Texture Style
Variations [81.45521258652734]
We propose a method to create plausible geometric and texture style variations of 3D objects.
Our method can create many novel stylized shapes, resulting in effortless 3D content creation and style-aware data augmentation.
arXiv Detail & Related papers (2021-08-30T02:28:31Z)