A Kendall Shape Space Approach to 3D Shape Estimation from 2D Landmarks
- URL: http://arxiv.org/abs/2207.12687v1
- Date: Tue, 26 Jul 2022 07:00:50 GMT
- Title: A Kendall Shape Space Approach to 3D Shape Estimation from 2D Landmarks
- Authors: Martha Paskin and Daniel Baum and Mason N. Dean and Christoph von Tycowicz
- Abstract summary: We present a new approach based on Kendall's shape space to reconstruct 3D shapes from single monocular 2D images.
The work is motivated by an application to study the feeding behavior of the basking shark.
- Score: 0.5161531917413708
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: 3D shapes provide substantially more information than 2D images. However, the
acquisition of 3D shapes is often much more difficult than acquiring 2D images,
and sometimes impossible, making it necessary to derive the 3D shape
from 2D images. Although this is, in general, a mathematically ill-posed
problem, it might be solved by constraining the problem formulation using prior
information. Here, we present a new approach based on Kendall's shape space to
reconstruct 3D shapes from single monocular 2D images. The work is motivated by
an application to study the feeding behavior of the basking shark, an
endangered species whose massive size and mobility render 3D shape data nearly
impossible to obtain, hampering understanding of their feeding behaviors and
ecology. 2D images of these animals in feeding position, however, are readily
available. We compare our approach with state-of-the-art shape-based
approaches, both on human stick models and on shark head skeletons. Using a
small set of training shapes, we show that the Kendall shape space approach is
substantially more robust than previous methods and results in plausible
shapes. This is essential for the motivating application in which specimens are
rare and therefore only a few training shapes are available.
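Kendall's shape space treats a landmark configuration as a shape by quotienting out translation, scale, and rotation. As a minimal illustrative sketch (not code from the paper), the pre-shape of a landmark set can be obtained by centering and normalizing, and the geodesic (Procrustes) distance between two shapes follows from an optimal rotation computed via SVD; the function names here are hypothetical:

```python
import numpy as np

def preshape(landmarks):
    """Map a k x 3 landmark configuration to Kendall pre-shape space
    by removing translation (centering) and scale (unit Frobenius norm).
    Rotation is quotiented out separately, e.g. via Procrustes alignment."""
    x = np.asarray(landmarks, dtype=float)
    x = x - x.mean(axis=0)           # remove translation
    size = np.linalg.norm(x)         # centroid size
    if size == 0:
        raise ValueError("degenerate configuration: all landmarks coincide")
    return x / size                  # remove scale

def procrustes_distance(a, b):
    """Geodesic distance between the shapes of two landmark sets,
    after optimally rotating one pre-shape onto the other."""
    za, zb = preshape(a), preshape(b)
    # optimal rotation from the SVD of the cross-covariance matrix
    u, s, vt = np.linalg.svd(za.T @ zb)
    # flip the smallest singular value if needed to stay in SO(3)
    s[-1] *= np.sign(np.linalg.det(u @ vt))
    cos_theta = np.clip(s.sum(), -1.0, 1.0)
    return np.arccos(cos_theta)      # geodesic distance in shape space

# Example: configurations differing only by a similarity transform
# (here scaling and translation) have shape distance ~0.
tri = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
moved = 2.5 * tri + np.array([3.0, -1.0, 0.5])
print(procrustes_distance(tri, moved))
```

This sketch only illustrates the quotient construction underlying Kendall's shape space; the paper's actual contribution is estimating a 3D shape from 2D landmarks within this space, which additionally requires handling the camera projection.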
Related papers
- AG3D: Learning to Generate 3D Avatars from 2D Image Collections [96.28021214088746]
We propose a new adversarial generative model of realistic 3D people from 2D images.
Our method captures shape and deformation of the body and loose clothing by adopting a holistic 3D generator.
We experimentally find that our method outperforms previous 3D- and articulation-aware methods in terms of geometry and appearance.
arXiv Detail & Related papers (2023-05-03T17:56:24Z)
- HoloDiffusion: Training a 3D Diffusion Model using 2D Images [71.1144397510333]
We introduce a new diffusion setup that can be trained, end-to-end, with only posed 2D images for supervision.
We show that our diffusion models are scalable, train robustly, and are competitive in terms of sample quality and fidelity to existing approaches for 3D generative modeling.
arXiv Detail & Related papers (2023-03-29T07:35:56Z)
- Beyond 3DMM: Learning to Capture High-fidelity 3D Face Shape [77.95154911528365]
3D Morphable Model (3DMM) fitting has widely benefited face analysis due to its strong 3D prior.
Previous reconstructed 3D faces suffer from degraded visual verisimilitude due to the loss of fine-grained geometry.
This paper proposes a complete solution to capture the personalized shape so that the reconstructed shape looks identical to the corresponding person.
arXiv Detail & Related papers (2022-04-09T03:46:18Z)
- Pop-Out Motion: 3D-Aware Image Deformation via Learning the Shape Laplacian [58.704089101826774]
We present a 3D-aware image deformation method with minimal restrictions on shape category and deformation type.
We take a supervised learning-based approach to predict the shape Laplacian of the underlying volume of a 3D reconstruction represented as a point cloud.
In the experiments, we present our results of deforming 2D character and clothed human images.
arXiv Detail & Related papers (2022-03-29T04:57:18Z)
- Detailed Avatar Recovery from Single Image [50.82102098057822]
This paper presents a novel framework to recover a detailed avatar from a single image.
We use deep neural networks to refine the 3D shape in a Hierarchical Mesh Deformation framework.
Our method can restore detailed human body shapes with complete textures beyond skinned models.
arXiv Detail & Related papers (2021-08-06T03:51:26Z)
- Do 2D GANs Know 3D Shape? Unsupervised 3D shape reconstruction from 2D Image GANs [156.1209884183522]
State-of-the-art 2D generative models like GANs show unprecedented quality in modeling the natural image manifold.
We present the first attempt to directly mine 3D geometric cues from an off-the-shelf 2D GAN that is trained on RGB images only.
arXiv Detail & Related papers (2020-11-02T09:38:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.