Dr.3D: Adapting 3D GANs to Artistic Drawings
- URL: http://arxiv.org/abs/2211.16798v1
- Date: Wed, 30 Nov 2022 07:30:43 GMT
- Title: Dr.3D: Adapting 3D GANs to Artistic Drawings
- Authors: Wonjoon Jin, Nuri Ryu, Geonung Kim, Seung-Hwan Baek, Sunghyun Cho
- Abstract summary: Dr.3D is a novel adaptation approach that adapts an existing 3D GAN to artistic drawings.
Dr.3D is equipped with three novel components to handle the geometric ambiguity: a deformation-aware 3D synthesis network, an alternating adaptation of pose estimation and image synthesis, and geometric priors.
- Score: 18.02433252623283
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While 3D GANs have recently demonstrated the high-quality synthesis of
multi-view consistent images and 3D shapes, they are mainly restricted to
photo-realistic human portraits. This paper aims to extend 3D GANs to a
different, but meaningful visual form: artistic portrait drawings. However,
extending existing 3D GANs to drawings is challenging due to the inevitable
geometric ambiguity present in drawings. To tackle this, we present Dr.3D, a
novel adaptation approach that adapts an existing 3D GAN to artistic drawings.
Dr.3D is equipped with three novel components to handle the geometric
ambiguity: a deformation-aware 3D synthesis network, an alternating adaptation
of pose estimation and image synthesis, and geometric priors. Experiments show
that our approach can successfully adapt 3D GANs to drawings and enable
multi-view consistent semantic editing of drawings.
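The alternating scheme is the heart of the adaptation, so a rough sketch may help make it concrete. The PyTorch-style loop below is a minimal illustration under assumed names (`pose_net`, `synth_net`, `disc`) and an assumed pseudo-labeling strategy for the pose phase; it is not the authors' implementation, and the discriminator update is omitted for brevity.
```python
# Minimal sketch of alternating adaptation between pose estimation and
# image synthesis. All module names and hyperparameters are assumptions
# for illustration; this is not the Dr.3D implementation.
import torch
import torch.nn.functional as F

def sample_poses(n):
    # Random yaw/pitch in a plausible portrait range (illustrative assumption).
    yaw = (torch.rand(n) - 0.5) * 1.6
    pitch = (torch.rand(n) - 0.5) * 0.6
    return torch.stack([yaw, pitch], dim=1)

def alternating_adaptation(pose_net, synth_net, disc, drawings,
                           opt_pose, opt_synth, rounds=10):
    for _ in range(rounds):
        # Phase 1: freeze pose estimation, adapt synthesis to the drawings.
        pose_net.requires_grad_(False)
        synth_net.requires_grad_(True)
        for real in drawings:
            with torch.no_grad():
                pose = pose_net(real)                  # per-image camera pose
            z = torch.randn(real.shape[0], 512)
            fake = synth_net(z, pose)                  # pose-conditioned synthesis
            g_loss = F.softplus(-disc(fake)).mean()    # non-saturating GAN loss
            opt_synth.zero_grad(); g_loss.backward(); opt_synth.step()
        # Phase 2: freeze synthesis, refine pose estimation on self-labeled
        # renders, using the sampled pose as pseudo ground truth.
        pose_net.requires_grad_(True)
        synth_net.requires_grad_(False)
        for real in drawings:
            n = real.shape[0]
            gt_pose = sample_poses(n)
            with torch.no_grad():
                fake = synth_net(torch.randn(n, 512), gt_pose)
            p_loss = F.mse_loss(pose_net(fake), gt_pose)
            opt_pose.zero_grad(); p_loss.backward(); opt_pose.step()
```
In such a scheme, the deformation-aware synthesis network and the geometric priors would enter as additional structure and loss terms in Phase 1.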
Related papers
- Toon3D: Seeing Cartoons from a New Perspective [52.85312338932685]
We focus our analysis on hand-drawn images from cartoons and anime.
Many cartoons are created by artists without a 3D rendering engine, which means that any new image of a scene is hand-drawn.
We correct for 2D drawing inconsistencies to recover a plausible 3D structure such that the newly warped drawings are consistent with each other.
arXiv Detail & Related papers (2024-05-16T17:59:51Z)
- En3D: An Enhanced Generative Model for Sculpting 3D Humans from 2D Synthetic Data [36.51674664590734]
We present En3D, an enhanced generative scheme for sculpting high-quality 3D human avatars.
Unlike previous works that rely on scarce 3D datasets or limited 2D collections with imbalanced viewing angles and pose priors, our approach aims to develop a zero-shot 3D generative scheme capable of producing 3D humans.
arXiv Detail & Related papers (2024-01-02T12:06:31Z)
- GeoDream: Disentangling 2D and Geometric Priors for High-Fidelity and Consistent 3D Generation [66.46683554587352]
We present GeoDream, a novel method that incorporates explicit generalized 3D priors with 2D diffusion priors.
Specifically, we first utilize a multi-view diffusion model to generate posed images and then construct a cost volume from the predicted images.
We further propose to harness 3D geometric priors to unlock the great potential of 3D awareness in 2D diffusion priors via a disentangled design.
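Cost-volume construction from posed images is a standard multi-view matching step; one common variant projects a 3D point grid into every view and scores cross-view feature agreement by variance. The sketch below illustrates that variant only; tensor layouts and names are assumptions, not GeoDream's code.
```python
# Hedged sketch of building a cost volume from posed images: project a 3D
# grid of points into every view, sample features, and use the variance
# across views as the matching cost. Illustrative only.
import torch
import torch.nn.functional as F

def cost_volume(feats, K, w2c, grid_pts):
    """feats: [V, C, H, W] per-view features; K: [V, 3, 3] intrinsics;
    w2c: [V, 3, 4] world-to-camera extrinsics; grid_pts: [N, 3] world points."""
    V, C, H, W = feats.shape
    pts_h = F.pad(grid_pts, (0, 1), value=1.0)        # [N, 4] homogeneous
    cam = torch.einsum('vij,nj->vni', w2c, pts_h)     # [V, N, 3] camera coords
    pix = torch.einsum('vij,vnj->vni', K, cam)        # [V, N, 3]
    uv = pix[..., :2] / pix[..., 2:].clamp(min=1e-6)  # [V, N, 2] pixel coords
    # Normalize to [-1, 1] for grid_sample.
    uv = torch.stack([uv[..., 0] / (W - 1), uv[..., 1] / (H - 1)], dim=-1) * 2 - 1
    sampled = F.grid_sample(feats, uv.unsqueeze(1), align_corners=True)  # [V, C, 1, N]
    sampled = sampled.squeeze(2).permute(0, 2, 1)     # [V, N, C]
    return sampled.var(dim=0)                         # [N, C] variance cost
```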
arXiv Detail & Related papers (2023-11-29T15:48:48Z)
- HyperStyle3D: Text-Guided 3D Portrait Stylization via Hypernetworks [101.36230756743106]
This paper is inspired by the success of 3D-aware GANs that bridge 2D and 3D domains with 3D fields as the intermediate representation for rendering 2D images.
We propose a novel method, dubbed HyperStyle3D, based on 3D-aware GANs for 3D portrait stylization.
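A typical way to realize text-guided stylization with hypernetworks is to predict small residuals for a generator layer's weights from a text embedding. The sketch below shows that general pattern under assumed shapes; it is not the HyperStyle3D architecture.
```python
# Hedged sketch of the hypernetwork idea: a small MLP maps a text embedding
# to offsets for a generator layer's weights. Names/shapes are assumptions.
import torch
import torch.nn as nn

class WeightOffsetHyperNet(nn.Module):
    def __init__(self, text_dim, layer_shape, hidden=256):
        super().__init__()
        self.layer_shape = layer_shape                  # e.g. (out_ch, in_ch)
        out_dim = int(torch.tensor(layer_shape).prod())
        self.mlp = nn.Sequential(
            nn.Linear(text_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, text_emb, base_weight, scale=0.1):
        # Predict a small residual and add it to the frozen base weight.
        delta = self.mlp(text_emb).view(self.layer_shape)
        return base_weight + scale * delta

# Usage: style a frozen generator layer with a CLIP-like text embedding.
hyper = WeightOffsetHyperNet(text_dim=512, layer_shape=(64, 64))
styled_w = hyper(torch.randn(512), torch.randn(64, 64))
```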
arXiv Detail & Related papers (2023-04-19T07:22:05Z)
- PAniC-3D: Stylized Single-view 3D Reconstruction from Portraits of Anime Characters [29.107457721261387]
We propose PAniC-3D, a system to reconstruct stylized 3D character heads directly from illustrated (p)ortraits of (ani)me (c)haracters.
arXiv Detail & Related papers (2023-03-25T23:36:17Z)
- 3DAvatarGAN: Bridging Domains for Personalized Editable Avatars [75.31960120109106]
3D-GANs synthesize geometry and texture by training on large-scale datasets with a consistent structure.
We propose an adaptation framework, where the source domain is a pre-trained 3D-GAN, while the target domain is a 2D-GAN trained on artistic datasets.
We show a deformation-based technique for modeling exaggerated geometry of artistic domains, enabling -- as a byproduct -- personalized geometric editing.
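One plausible reading of a deformation-based technique is a conditional 3D offset field applied to sample points before they query the canonical generator; scaling the offsets then yields the geometric-editing byproduct. The sketch below illustrates that idea with assumed names, not the paper's implementation.
```python
# Hedged sketch of a deformation field for exaggerated geometry: an MLP
# predicts a 3D offset per sample point, conditioned on a style code, and
# points are displaced before querying the canonical radiance field.
import torch
import torch.nn as nn

class DeformationField(nn.Module):
    def __init__(self, style_dim=512, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + style_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, pts, style, strength=1.0):
        # pts: [N, 3] sample points; style: [style_dim] exaggeration code.
        inp = torch.cat([pts, style.expand(pts.shape[0], -1)], dim=-1)
        offset = self.mlp(inp)
        # Scaling `strength` at inference is one way to expose personalized
        # geometric editing: dial the exaggeration up or down.
        return pts + strength * offset
```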
arXiv Detail & Related papers (2023-01-06T19:58:47Z)
- 3DHumanGAN: 3D-Aware Human Image Generation with 3D Pose Mapping [37.14866512377012]
3DHumanGAN is a 3D-aware generative adversarial network that synthesizes photorealistic images of full-body humans.
We propose a novel generator architecture in which a 2D convolutional backbone is modulated by a 3D pose mapping network.
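Modulating a 2D convolutional backbone by a pose network is commonly done FiLM-style, with per-channel scales and shifts predicted from the pose features. The block below sketches that mechanism under assumed names; the actual 3DHumanGAN modulation may differ.
```python
# Hedged sketch of a 2D conv block modulated by a 3D pose mapping network:
# the pose features drive per-channel scale/shift (FiLM-style) modulation.
import torch
import torch.nn as nn

class PoseModulatedBlock(nn.Module):
    def __init__(self, channels, pose_dim):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.to_scale_shift = nn.Linear(pose_dim, 2 * channels)

    def forward(self, x, pose_feat):
        # pose_feat: [B, pose_dim] output of the 3D pose mapping network.
        scale, shift = self.to_scale_shift(pose_feat).chunk(2, dim=1)
        h = self.conv(x)
        return h * (1 + scale[..., None, None]) + shift[..., None, None]
```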
arXiv Detail & Related papers (2022-12-14T17:59:03Z)
- XDGAN: Multi-Modal 3D Shape Generation in 2D Space [60.46777591995821]
We propose a novel method to convert 3D shapes into compact 1-channel geometry images and leverage StyleGAN3 and image-to-image translation networks to generate 3D objects in 2D space.
The generated geometry images are quick to convert to 3D meshes, enabling real-time 3D object synthesis, visualization and interactive editing.
We show both quantitatively and qualitatively that our method is highly effective at various tasks such as 3D shape generation, single view reconstruction and shape manipulation, while being significantly faster and more flexible compared to recent 3D generative models.
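The speed claim follows from the geometry-image representation: in the classic formulation each pixel stores an (x, y, z) surface point, so meshing is a fixed grid-connection step. The sketch below decodes a 3-channel xyz geometry image for intuition; XDGAN's compact 1-channel encoding would first need its own expansion to xyz.
```python
# Hedged sketch: decoding a geometry image into a triangle mesh. Classic
# geometry images store an (x, y, z) point per pixel, so meshing is just
# connecting the pixel grid, which is why conversion is fast.
import numpy as np

def geometry_image_to_mesh(gim):
    """gim: [H, W, 3] array of xyz coordinates -> (vertices, faces)."""
    H, W, _ = gim.shape
    vertices = gim.reshape(-1, 3)
    idx = np.arange(H * W).reshape(H, W)
    # Two triangles per grid quad.
    a, b = idx[:-1, :-1].ravel(), idx[:-1, 1:].ravel()
    c, d = idx[1:, :-1].ravel(), idx[1:, 1:].ravel()
    faces = np.concatenate([np.stack([a, b, c], 1), np.stack([b, d, c], 1)])
    return vertices, faces
```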
arXiv Detail & Related papers (2022-10-06T15:54:01Z)
- Realistic Image Synthesis with Configurable 3D Scene Layouts [59.872657806747576]
We propose a novel approach to realistic-looking image synthesis based on a 3D scene layout.
Our approach takes a 3D scene with semantic class labels as input and trains a 3D scene painting network.
With the trained painting network, realistic-looking images for the input 3D scene can be rendered and manipulated.
arXiv Detail & Related papers (2021-08-23T09:44:56Z)
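The painting network can be pictured as a label-map-to-image translator: render per-pixel semantic classes from the 3D scene, then translate them to RGB. The sketch below shows a minimal such translator with assumed names; the paper's network is more elaborate.
```python
# Hedged sketch of the "painting" step: a conv net translates a rendered
# semantic label map into an RGB image. Rendering labels from the 3D scene
# is assumed already done; names are illustrative, not the paper's network.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScenePainter(nn.Module):
    def __init__(self, num_classes, width=64):
        super().__init__()
        self.num_classes = num_classes
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, labels):
        # labels: [B, H, W] integer class ids rendered from the 3D scene.
        onehot = F.one_hot(labels, self.num_classes).permute(0, 3, 1, 2).float()
        return self.net(onehot)
```
Because every view is rendered from the same labeled 3D scene, edits to the scene propagate consistently to the painted images.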