Deep Deformable 3D Caricatures with Learned Shape Control
- URL: http://arxiv.org/abs/2207.14593v1
- Date: Fri, 29 Jul 2022 10:21:27 GMT
- Title: Deep Deformable 3D Caricatures with Learned Shape Control
- Authors: Yucheol Jung, Wonjong Jang, Soongjin Kim, Jiaolong Yang, Xin Tong,
Seungyong Lee
- Abstract summary: A 3D caricature is an exaggerated 3D depiction of a human face.
In this paper, we propose a caricature-based framework for building a deformable surface model.
We create variations of 3D surfaces by learning the parameters of a deformable model.
- Score: 25.5491131982863
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A 3D caricature is an exaggerated 3D depiction of a human face. The goal of
this paper is to model the variations of 3D caricatures in a compact parameter
space so that we can provide a useful data-driven toolkit for handling 3D
caricature deformations. To achieve the goal, we propose an MLP-based framework
for building a deformable surface model, which takes a latent code and produces
a 3D surface. In the framework, a SIREN MLP models a function that takes a 3D
position on a fixed template surface and returns a 3D displacement vector for
the input position. We create variations of 3D surfaces by learning a
hypernetwork that takes a latent code and produces the parameters of the MLP.
Once learned, our deformable model provides a nice editing space for 3D
caricatures, supporting label-based semantic editing and point-handle-based
deformation, both of which produce highly exaggerated and natural 3D caricature
shapes. We also demonstrate other applications of our deformable model, such as
automatic 3D caricature creation.
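The architecture described above (a hypernetwork maps a latent code to the parameters of a SIREN MLP, which in turn maps a template-surface position to a displacement vector) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the latent dimension, hidden width, and the single-linear-layer hypernetwork are all assumptions made for brevity, and only the sine-activation frequency `OMEGA_0 = 30` follows the original SIREN convention.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 8   # size of the per-shape latent code (assumed)
HIDDEN = 16      # SIREN hidden width (assumed)
OMEGA_0 = 30.0   # sine frequency scaling, as in the original SIREN formulation

# SIREN MLP shapes: 3D template position -> hidden -> hidden -> 3D displacement
layer_shapes = [(3, HIDDEN), (HIDDEN, HIDDEN), (HIDDEN, 3)]
n_params = sum(i * o + o for i, o in layer_shapes)

# Hypernetwork: here just one linear map from latent code to the full
# flattened SIREN parameter vector (real hypernetworks are deeper).
H = rng.normal(0, 0.01, size=(n_params, LATENT_DIM))
b = np.zeros(n_params)

def siren_params(z):
    """Predict the flat SIREN parameter vector for latent code z and split it
    into per-layer (weight, bias) pairs."""
    flat = H @ z + b
    params, offset = [], 0
    for i, o in layer_shapes:
        W = flat[offset:offset + i * o].reshape(i, o)
        offset += i * o
        bias = flat[offset:offset + o]
        offset += o
        params.append((W, bias))
    return params

def displacement(x, z):
    """SIREN forward pass: position x (3,) on the template surface plus latent
    code z -> 3D displacement vector for that position."""
    params = siren_params(z)
    h = x
    for W, bias in params[:-1]:
        h = np.sin(OMEGA_0 * (h @ W + bias))  # sine activations (SIREN)
    W, bias = params[-1]
    return h @ W + bias                       # linear output layer

# Deform one template vertex with a sampled latent code.
z = rng.normal(size=LATENT_DIM)
x = np.array([0.1, -0.2, 0.3])   # a point on the fixed template surface
new_x = x + displacement(x, z)   # deformed surface position
```

Because the surface is produced by displacing a fixed template, all latent codes yield meshes in dense correspondence, which is what makes the label-based semantic editing and point-handle deformation described in the abstract possible.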
Related papers
- 3D Geometry-aware Deformable Gaussian Splatting for Dynamic View Synthesis [49.352765055181436]
We propose a 3D geometry-aware deformable Gaussian Splatting method for dynamic view synthesis.
Our solution achieves 3D geometry-aware deformation modeling, which enables improved dynamic view synthesis and 3D dynamic reconstruction.
arXiv Detail & Related papers (2024-04-09T12:47:30Z) - En3D: An Enhanced Generative Model for Sculpting 3D Humans from 2D
Synthetic Data [36.51674664590734]
We present En3D, an enhanced generative scheme for sculpting high-quality 3D human avatars.
Unlike previous works that rely on scarce 3D datasets or limited 2D collections with imbalanced viewing angles and pose priors, our approach aims to develop a zero-shot 3D generative scheme capable of producing visually realistic 3D humans.
arXiv Detail & Related papers (2024-01-02T12:06:31Z) - DeformToon3D: Deformable 3D Toonification from Neural Radiance Fields [96.0858117473902]
3D toonification involves transferring the style of an artistic domain onto a target 3D face with stylized geometry and texture.
We propose DeformToon3D, an effective toonification framework tailored for hierarchical 3D GAN.
Our approach decomposes 3D toonification into subproblems of geometry and texture stylization to better preserve the original latent space.
arXiv Detail & Related papers (2023-09-08T16:17:45Z) - 3D VR Sketch Guided 3D Shape Prototyping and Exploration [108.6809158245037]
We propose a 3D shape generation network that takes a 3D VR sketch as a condition.
We assume that sketches are created by novices without art training.
Our method creates multiple 3D shapes that align with the original sketch's structure.
arXiv Detail & Related papers (2023-06-19T10:27:24Z) - HyperStyle3D: Text-Guided 3D Portrait Stylization via Hypernetworks [101.36230756743106]
This paper is inspired by the success of 3D-aware GANs that bridge 2D and 3D domains with 3D fields as the intermediate representation for rendering 2D images.
We propose a novel method, dubbed HyperStyle3D, based on 3D-aware GANs for 3D portrait stylization.
arXiv Detail & Related papers (2023-04-19T07:22:05Z) - Structured 3D Features for Reconstructing Controllable Avatars [43.36074729431982]
We introduce Structured 3D Features, a model based on a novel implicit 3D representation that pools pixel-aligned image features onto dense 3D points sampled from a parametric, statistical human mesh surface.
We show that our S3F model surpasses the previous state-of-the-art on various tasks, including monocular 3D reconstruction, as well as albedo and shading estimation.
arXiv Detail & Related papers (2022-12-13T18:57:33Z) - GET3D: A Generative Model of High Quality 3D Textured Shapes Learned
from Images [72.15855070133425]
We introduce GET3D, a Generative model that directly generates Explicit Textured 3D meshes with complex topology, rich geometric details, and high-fidelity textures.
GET3D is able to generate high-quality 3D textured meshes, ranging from cars, chairs, animals, motorbikes and human characters to buildings.
arXiv Detail & Related papers (2022-09-22T17:16:19Z) - Pop-Out Motion: 3D-Aware Image Deformation via Learning the Shape
Laplacian [58.704089101826774]
We present a 3D-aware image deformation method with minimal restrictions on shape category and deformation type.
We take a supervised learning-based approach to predict the shape Laplacian of the underlying volume of a 3D reconstruction represented as a point cloud.
In the experiments, we present our results of deforming 2D character and clothed human images.
arXiv Detail & Related papers (2022-03-29T04:57:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.