PAniC-3D: Stylized Single-view 3D Reconstruction from Portraits of Anime
Characters
- URL: http://arxiv.org/abs/2303.14587v1
- Date: Sat, 25 Mar 2023 23:36:17 GMT
- Title: PAniC-3D: Stylized Single-view 3D Reconstruction from Portraits of Anime
Characters
- Authors: Shuhong Chen, Kevin Zhang, Yichun Shi, Heng Wang, Yiheng Zhu, Guoxian
Song, Sizhe An, Janus Kristjansson, Xiao Yang, Matthias Zwicker
- Abstract summary: We propose PAniC-3D, a system to reconstruct stylized 3D character heads directly from illustrated (p)ortraits of (ani)me (c)haracters.
- Score: 29.107457721261387
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose PAniC-3D, a system to reconstruct stylized 3D character heads
directly from illustrated (p)ortraits of (ani)me (c)haracters. Our anime-style
domain poses unique challenges to single-view reconstruction; compared to
natural images of human heads, character portrait illustrations have hair and
accessories with more complex and diverse geometry, and are shaded with
non-photorealistic contour lines. In addition, there is a lack of both 3D model
and portrait illustration data suitable to train and evaluate this ambiguous
stylized reconstruction task. Facing these challenges, our proposed PAniC-3D
architecture crosses the illustration-to-3D domain gap with a line-filling
model, and represents sophisticated geometries with a volumetric radiance
field. We train our system with two large new datasets (11.2k Vroid 3D models,
1k Vtuber portrait illustrations), and evaluate on a novel AnimeRecon benchmark
of illustration-to-3D pairs. PAniC-3D significantly outperforms baseline
methods, and provides data to establish the task of stylized reconstruction
from portrait illustrations.
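The abstract notes that PAniC-3D represents geometry with a volumetric radiance field. The paper's own architecture is not reproduced here, but the standard NeRF-style volume-rendering quadrature that such fields use can be sketched in a few lines (a minimal illustration, not PAniC-3D's actual implementation):

```python
import numpy as np

def render_ray(sigmas, colors, deltas):
    """Front-to-back compositing of samples along one ray.

    sigmas: (N,) volume densities at each sample
    colors: (N, 3) RGB at each sample
    deltas: (N,) spacing between consecutive samples
    """
    # Per-sample opacity: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance: probability the ray reaches sample i unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    rgb = (weights[:, None] * colors).sum(axis=0)
    return rgb, weights

# A fully opaque first sample occludes everything behind it:
sigmas = np.array([1e9, 1.0])
colors = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
deltas = np.array([1.0, 1.0])
rgb, weights = render_ray(sigmas, colors, deltas)  # rgb ≈ [1, 0, 0]
```

In a full system the `sigmas` and `colors` would come from a learned network queried at 3D sample positions; here they are hard-coded only to show the compositing step.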
Related papers
- Generating Animatable 3D Cartoon Faces from Single Portraits [51.15618892675337]
We present a novel framework to generate animatable 3D cartoon faces from a single portrait image.
We propose a two-stage reconstruction method to recover the 3D cartoon face with detailed texture.
Finally, we propose a semantic-preserving face rigging method based on manually created templates and deformation transfer.

arXiv Detail & Related papers (2023-07-04T04:12:50Z)
- HyperStyle3D: Text-Guided 3D Portrait Stylization via Hypernetworks [101.36230756743106]
This paper is inspired by the success of 3D-aware GANs that bridge 2D and 3D domains with 3D fields as the intermediate representation for rendering 2D images.
We propose a novel method, dubbed HyperStyle3D, based on 3D-aware GANs for 3D portrait stylization.
arXiv Detail & Related papers (2023-04-19T07:22:05Z)
- AgileGAN3D: Few-Shot 3D Portrait Stylization by Augmented Transfer Learning [80.67196184480754]
We propose a novel framework, AgileGAN3D, that can produce artistically appealing 3D portraits with detailed geometry.
New stylization can be obtained with just a few (around 20) unpaired 2D exemplars.
Our pipeline demonstrates strong capability in turning user photos into a diverse range of 3D artistic portraits.
arXiv Detail & Related papers (2023-03-24T23:04:20Z)
- Make Your Brief Stroke Real and Stereoscopic: 3D-Aware Simplified Sketch to Portrait Generation [51.64832538714455]
Existing studies only generate portraits in the 2D plane with fixed views, making the results less vivid.
In this paper, we present Stereoscopic Simplified Sketch-to-Portrait (SSSP), which explores the possibility of creating Stereoscopic 3D-aware portraits.
Our key insight is to design sketch-aware constraints that can fully exploit the prior knowledge of a tri-plane-based 3D-aware generative model.
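Tri-plane representations of the kind referenced above (popularized by EG3D-style 3D-aware GANs) factor a 3D feature volume into three axis-aligned 2D feature planes. A point's feature vector is obtained by projecting it onto each plane, bilinearly sampling, and summing. A minimal sketch of that lookup (illustrative only, with made-up plane resolutions and channel counts, not the SSSP implementation):

```python
import numpy as np

def bilinear(plane, u, v):
    """Bilinearly sample an (H, W, C) feature plane at coords (u, v) in [0, 1]."""
    H, W, _ = plane.shape
    x, y = u * (W - 1), v * (H - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
    fx, fy = x - x0, y - y0
    return ((1 - fx) * (1 - fy) * plane[y0, x0] + fx * (1 - fy) * plane[y0, x1]
            + (1 - fx) * fy * plane[y1, x0] + fx * fy * plane[y1, x1])

def triplane_features(planes, p):
    """Project point p in [0, 1]^3 onto the XY, XZ, YZ planes and sum the samples."""
    x, y, z = p
    return (bilinear(planes['xy'], x, y)
            + bilinear(planes['xz'], x, z)
            + bilinear(planes['yz'], y, z))

# Toy planes: 4x4 resolution, 2 feature channels each
planes = {k: np.ones((4, 4, 2)) for k in ('xy', 'xz', 'yz')}
feats = triplane_features(planes, (0.3, 0.7, 0.5))  # sums three plane samples
```

In a real 3D-aware generator the planes are synthesized by a GAN and the summed feature is decoded into density and color; this sketch only shows the projection-and-sample step.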
arXiv Detail & Related papers (2023-02-14T06:28:42Z)
- Structured 3D Features for Reconstructing Controllable Avatars [43.36074729431982]
We introduce Structured 3D Features, a model based on a novel implicit 3D representation that pools pixel-aligned image features onto dense 3D points sampled from a parametric, statistical human mesh surface.
We show that our S3F model surpasses the previous state-of-the-art on various tasks, including monocular 3D reconstruction, as well as albedo and shading estimation.
arXiv Detail & Related papers (2022-12-13T18:57:33Z)
- Dr.3D: Adapting 3D GANs to Artistic Drawings [18.02433252623283]
Dr.3D is a novel approach that adapts an existing 3D GAN to artistic drawings.
Dr.3D is equipped with three novel components to handle the geometric ambiguity: a deformation-aware 3D synthesis network, an alternating adaptation of pose estimation and image synthesis, and geometric priors.
arXiv Detail & Related papers (2022-11-30T07:30:43Z)
- DRaCoN -- Differentiable Rasterization Conditioned Neural Radiance Fields for Articulated Avatars [92.37436369781692]
We present DRaCoN, a framework for learning full-body volumetric avatars.
It exploits the advantages of both the 2D and 3D neural rendering techniques.
Experiments on the challenging ZJU-MoCap and Human3.6M datasets indicate that DRaCoN outperforms state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T17:59:15Z)
- Model-based 3D Hand Reconstruction via Self-Supervised Learning [72.0817813032385]
Reconstructing a 3D hand from a single-view RGB image is challenging due to various hand configurations and depth ambiguity.
We propose S2HAND, a self-supervised 3D hand reconstruction network that can jointly estimate pose, shape, texture, and the camera viewpoint.
For the first time, we demonstrate the feasibility of training an accurate 3D hand reconstruction network without relying on manual annotations.
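Self-supervised reconstruction systems of this kind typically avoid 3D annotations by penalizing the mismatch between projected 3D predictions and 2D evidence detected in the image. A minimal sketch of such a reprojection loss with a pinhole camera (a generic illustration with assumed focal length and principal point, not the S2HAND training objective):

```python
import numpy as np

def project(points3d, focal, center):
    """Pinhole projection of camera-frame 3D points (N, 3) to 2D pixels (N, 2)."""
    xy = points3d[:, :2] / points3d[:, 2:3]  # perspective divide by depth
    return focal * xy + center

def reprojection_loss(points3d, keypoints2d, focal, center):
    """Mean squared pixel error between projected joints and detected 2D keypoints."""
    diff = project(points3d, focal, center) - keypoints2d
    return float((diff ** 2).mean())

# Toy example: two predicted joints, assumed intrinsics
joints3d = np.array([[0.5, -0.2, 1.0], [1.0, 1.0, 2.0]])
focal, center = 100.0, np.array([64.0, 64.0])
detected2d = project(joints3d, focal, center)  # pretend detector output
loss = reprojection_loss(joints3d, detected2d, focal, center)  # 0.0 at a perfect fit
```

Minimizing this loss over network parameters lets the 3D prediction be supervised entirely by 2D detections, which is the general idea behind annotation-free training.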
arXiv Detail & Related papers (2021-03-22T10:12:43Z)
- 3DCaricShop: A Dataset and A Baseline Method for Single-view 3D Caricature Face Reconstruction [23.539931080533226]
3DCaricShop is the first large-scale 3D caricature dataset that contains 2000 high-quality diversified 3D caricatures manually crafted by professional artists.
3DCaricShop also provides rich annotations including a paired 2D caricature image, camera parameters and 3D facial landmarks.
We propose a novel view-collaborative graph convolution network (VCGCN) to extract key points from the implicit mesh for accurate alignment.
arXiv Detail & Related papers (2021-03-15T08:24:29Z)
- 3D Shape Reconstruction from Free-Hand Sketches [42.15888734492648]
Despite great progress achieved in 3D reconstruction from distortion-free line drawings, little effort has been made to reconstruct 3D shapes from free-hand sketches.
We aim to enhance the power of sketches in 3D-related applications such as interactive design and VR/AR games.
A major challenge for free-hand sketch 3D reconstruction comes from the insufficient training data and free-hand sketch diversity.
arXiv Detail & Related papers (2020-06-17T07:43:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.