Pop-Out Motion: 3D-Aware Image Deformation via Learning the Shape Laplacian
- URL: http://arxiv.org/abs/2203.15235v1
- Date: Tue, 29 Mar 2022 04:57:18 GMT
- Title: Pop-Out Motion: 3D-Aware Image Deformation via Learning the Shape Laplacian
- Authors: Jihyun Lee, Minhyuk Sung, Hyunjin Kim, Tae-Kyun Kim
- Abstract summary: We present a 3D-aware image deformation method with minimal restrictions on shape category and deformation type.
We take a supervised learning-based approach to predict the shape Laplacian of the underlying volume of a 3D reconstruction represented as a point cloud.
In the experiments, we present our results of deforming 2D character and clothed human images.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We propose a framework that can deform an object in a 2D image as it exists
in 3D space. Most existing methods for 3D-aware image manipulation are limited
to (1) only changing the global scene information or depth, or (2) manipulating
objects of specific categories. In this paper, we present a 3D-aware image
deformation method with minimal restrictions on shape category and deformation
type. While our framework leverages 2D-to-3D reconstruction, we argue that
reconstruction alone is not sufficient for realistic deformations due to its
vulnerability to topological errors. Thus, we take a supervised
learning-based approach to predict the shape Laplacian of the underlying volume
of a 3D reconstruction represented as a point cloud. Given the deformation
energy calculated using the predicted shape Laplacian and user-defined
deformation handles (e.g., keypoints), we obtain bounded biharmonic weights to
model plausible handle-based image deformation. In the experiments, we present
our results of deforming 2D character and clothed human images. We also
quantitatively show that our approach can produce more accurate deformation
weights compared to alternative methods (i.e., mesh reconstruction and point
cloud Laplacian methods).
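To make the weight computation concrete, below is a minimal, hypothetical sketch of handle-based biharmonic weights, not the authors' code. The paper predicts the shape Laplacian of the underlying volume with a network; this sketch substitutes a simple k-NN graph Laplacian on the point cloud so it runs standalone. True bounded biharmonic weights also enforce 0 <= w <= 1 via a quadratic-programming solver, whereas the sketch only clips and renormalizes. The function names (knn_graph_laplacian, biharmonic_weights) and all parameters are illustrative assumptions.

```python
# Hypothetical sketch: handle-based deformation weights from a shape Laplacian.
# The k-NN graph Laplacian below is a stand-in for the learned shape Laplacian,
# and clipping is a crude stand-in for the bound constraints of true bounded
# biharmonic weights.
import numpy as np
from scipy.sparse import csr_matrix, diags
from scipy.sparse.linalg import splu
from scipy.spatial import cKDTree

def knn_graph_laplacian(points, k=8):
    """Combinatorial Laplacian L = D - A of the k-NN graph."""
    n = len(points)
    _, idx = cKDTree(points).query(points, k=k + 1)   # idx[:, 0] is the point itself
    rows = np.repeat(np.arange(n), k)
    cols = idx[:, 1:].ravel()
    A = csr_matrix((np.ones(n * k), (rows, cols)), shape=(n, n))
    A = ((A + A.T) > 0).astype(float)                 # symmetrize the adjacency
    return diags(np.asarray(A.sum(axis=1)).ravel()) - A

def biharmonic_weights(points, handle_ids, k=8):
    """Minimize the discrete biharmonic energy w^T (L^T L) w per handle,
    subject to w_j(handle_i) = delta_ij, by eliminating the handle rows:
    Q_ff w_f = -Q_fh w_h. Assumes the k-NN graph is connected."""
    n, h = len(points), len(handle_ids)
    L = knn_graph_laplacian(points, k)
    Q = (L.T @ L).tocsc()                             # discrete bilaplacian
    free = np.setdiff1d(np.arange(n), handle_ids)
    W = np.zeros((n, h))
    W[handle_ids] = np.eye(h)                         # delta constraints at handles
    rhs = -(Q[free][:, handle_ids] @ W[handle_ids])
    W[free] = splu(Q[free][:, free].tocsc()).solve(rhs)  # sparse LU, multiple RHS
    W = np.clip(W, 0.0, 1.0)                          # crude bound enforcement
    return W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)  # partition of unity

if __name__ == "__main__":
    pts = np.random.rand(500, 3)                      # toy point cloud
    W = biharmonic_weights(pts, handle_ids=np.array([0, 100, 400]))
    print(W.shape, W.sum(axis=1)[:3])                 # (500, 3); rows sum to 1
```

Given such a weight matrix W, the standard bounded-biharmonic-weights usage deforms each point p_i by a weighted blend of the user's handle transforms, p_i' = sum_j W[i, j] * T_j(p_i), as in linear blend skinning; the deformed geometry is then rendered back to produce the deformed image.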
Related papers
- 3D Surface Reconstruction in the Wild by Deforming Shape Priors from Synthetic Data
Reconstructing the underlying 3D surface of an object from a single image is a challenging problem.
We present a new method for joint category-specific 3D reconstruction and object pose estimation from a single image.
Our approach achieves state-of-the-art reconstruction performance across several real-world datasets.
arXiv Detail & Related papers (2023-02-24T20:37:27Z)
- Beyond 3DMM: Learning to Capture High-fidelity 3D Face Shape
3D Morphable Model (3DMM) fitting has widely benefited face analysis due to its strong 3D prior.
Previously reconstructed 3D faces suffer from degraded visual verisimilitude due to the loss of fine-grained geometry.
This paper proposes a complete solution for capturing the personalized shape, so that the reconstructed shape looks identical to that of the corresponding person.
arXiv Detail & Related papers (2022-04-09T03:46:18Z)
- Disentangled3D: Learning a 3D Generative Model with Disentangled Geometry and Appearance from Monocular Images
State-of-the-art 3D generative models are GANs which use neural 3D volumetric representations for synthesis.
In this paper, we design a 3D GAN which can learn a disentangled model of objects, just from monocular observations.
arXiv Detail & Related papers (2022-03-29T22:03:18Z)
- Learning Canonical 3D Object Representation for Fine-Grained Recognition
We propose a novel framework for fine-grained object recognition that learns to recover object variation in 3D space from a single image.
We represent an object as a composition of 3D shape and its appearance, while eliminating the effect of camera viewpoint.
By incorporating 3D shape and appearance jointly in a deep representation, our method learns the discriminative representation of the object.
arXiv Detail & Related papers (2021-08-10T12:19:34Z)
- Detailed Avatar Recovery from Single Image
This paper presents a novel framework to recover detailed avatars from a single image.
We use deep neural networks to refine the 3D shape in a Hierarchical Mesh Deformation framework.
Our method can restore detailed human body shapes with complete textures beyond skinned models.
arXiv Detail & Related papers (2021-08-06T03:51:26Z)
- To The Point: Correspondence-driven monocular 3D category reconstruction
To The Point (TTP) is a method for reconstructing 3D objects from a single image using 2D to 3D correspondences learned from weak supervision.
We replace CNN-based regression of camera pose and non-rigid deformation with correspondence-driven estimation, obtaining substantially more accurate 3D reconstructions.
arXiv Detail & Related papers (2021-06-10T11:21:14Z)
- Do 2D GANs Know 3D Shape? Unsupervised 3D shape reconstruction from 2D Image GANs
State-of-the-art 2D generative models like GANs show unprecedented quality in modeling the natural image manifold.
We present the first attempt to directly mine 3D geometric cues from an off-the-shelf 2D GAN that is trained on RGB images only.
arXiv Detail & Related papers (2020-11-02T09:38:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.