TailorMe: Self-Supervised Learning of an Anatomically Constrained
Volumetric Human Shape Model
- URL: http://arxiv.org/abs/2312.02173v1
- Date: Fri, 3 Nov 2023 07:42:19 GMT
- Authors: Stephan Wenninger and Fabian Kemper and Ulrich Schwanecke and Mario
Botsch
- Abstract summary: Human shape spaces have been extensively studied, as they are a core element of human shape and pose inference tasks.
We register an anatomical template, consisting of skeleton bones and soft tissue, to the surface scans of the CAESAR database.
This data is then used to learn an anatomically constrained volumetric human shape model in a self-supervised fashion.
- Score: 4.474107938692397
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Human shape spaces have been extensively studied, as they are a core element
of human shape and pose inference tasks. Classic methods for creating a human
shape model register a surface template mesh to a database of 3D scans and use
dimensionality reduction techniques, such as Principal Component Analysis, to
learn a compact representation. While these shape models enable global shape
modifications by correlating anthropometric measurements with the learned
subspace, they only provide limited localized shape control. We instead
register a volumetric anatomical template, consisting of skeleton bones and
soft tissue, to the surface scans of the CAESAR database. We further enlarge
our training data to the full Cartesian product of all skeletons and all soft
tissues using physically plausible volumetric deformation transfer. This data
is then used to learn an anatomically constrained volumetric human shape model
in a self-supervised fashion. The resulting TailorMe model enables shape
sampling, localized shape manipulation, and fast inference from given surface
scans.
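The abstract contrasts TailorMe with classic shape models that register a surface template to a scan database and compress it with Principal Component Analysis. A minimal sketch of that classic PCA pipeline is below; the array shapes and variable names are illustrative assumptions, not from the paper, and random vectors stand in for registered scans.

```python
import numpy as np

# Sketch of a classic PCA-based shape space (the baseline the paper
# contrasts with). Assumed setup: n_scans registered meshes in vertex
# correspondence, each flattened to a 3*n_verts coordinate vector.
rng = np.random.default_rng(0)
n_scans, n_verts = 50, 100
X = rng.normal(size=(n_scans, 3 * n_verts))  # stand-in for registered scans

# Center the data around the mean shape.
mean_shape = X.mean(axis=0)
Xc = X - mean_shape

# SVD of the centered data yields the principal components; keeping the
# top k rows of Vt gives a compact, global shape basis.
k = 10
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
basis = Vt[:k]                          # (k, 3*n_verts), orthonormal rows

# Encode a scan into k coefficients, then decode back to a full shape.
coeffs = (X[0] - mean_shape) @ basis.T  # (k,)
recon = mean_shape + coeffs @ basis     # approximate reconstruction
```

Because each basis vector spans the whole body, editing one coefficient changes the shape globally; this is the limited localized control that motivates the volumetric, anatomically constrained model described above.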
Related papers
- Generative 3D Cardiac Shape Modelling for In-Silico Trials [0.0]
We propose a deep learning method to model and generate synthetic aortic shapes.
The network is trained on a dataset of aortic root meshes reconstructed from CT images.
By sampling from the learned embedding vectors, we can generate novel shapes that resemble real patient anatomies.
arXiv Detail & Related papers (2024-09-24T12:59:18Z) - ShapeBoost: Boosting Human Shape Estimation with Part-Based
Parameterization and Clothing-Preserving Augmentation [58.50613393500561]
We propose ShapeBoost, a new human shape recovery framework.
It achieves pixel-level alignment even for rare body shapes and high accuracy for people wearing different types of clothes.
arXiv Detail & Related papers (2024-03-02T23:40:23Z) - ReshapeIT: Reliable Shape Interaction with Implicit Template for Anatomical Structure Reconstruction [59.971808117043366]
ReShapeIT represents an anatomical structure with an implicit template field shared within the same category.
It ensures the implicit template field generates valid templates by strengthening the constraint of the correspondence between the instance shape and the template shape.
A Template Interaction Module is introduced to reconstruct unseen shapes by interacting valid template shapes with instance-wise latent codes.
arXiv Detail & Related papers (2023-12-11T07:09:32Z) - Mesh2SSM: From Surface Meshes to Statistical Shape Models of Anatomy [0.0]
We propose Mesh2SSM, a new approach that leverages unsupervised, permutation-invariant representation learning to estimate how to deform a template point cloud to subject-specific meshes.
Mesh2SSM can also learn a population-specific template, reducing any bias due to template selection.
arXiv Detail & Related papers (2023-05-13T00:03:59Z) - A Generative Shape Compositional Framework to Synthesise Populations of
Virtual Chimaeras [52.33206865588584]
We introduce a generative shape model for complex anatomical structures, learnable from unpaired datasets.
We build virtual chimaeras from databases of whole-heart shape assemblies that each contribute samples for heart substructures.
Our approach significantly outperforms a PCA-based shape model (trained with complete data) in terms of generalisability and specificity.
arXiv Detail & Related papers (2022-10-04T13:36:52Z) - LatentHuman: Shape-and-Pose Disentangled Latent Representation for Human
Bodies [78.17425779503047]
We propose a novel neural implicit representation for the human body.
It is fully differentiable and optimizable with disentangled shape and pose latent spaces.
Our model can be trained and fine-tuned directly on non-watertight raw data with well-designed losses.
arXiv Detail & Related papers (2021-11-30T04:10:57Z) - Identity-Disentangled Neural Deformation Model for Dynamic Meshes [8.826835863410109]
We learn a neural deformation model that disentangles identity-induced shape variations from pose-dependent deformations using implicit neural functions.
We propose two methods to integrate global pose alignment with our neural deformation model.
Our method also outperforms traditional skeleton-driven models in reconstructing surface details such as palm prints or tendons without limitations from a fixed template.
arXiv Detail & Related papers (2021-09-30T17:43:06Z) - Detailed Avatar Recovery from Single Image [50.82102098057822]
This paper presents a novel framework to recover a detailed avatar from a single image.
We use the deep neural networks to refine the 3D shape in a Hierarchical Mesh Deformation framework.
Our method can restore detailed human body shapes with complete textures beyond skinned models.
arXiv Detail & Related papers (2021-08-06T03:51:26Z) - Combining Implicit Function Learning and Parametric Models for 3D Human
Reconstruction [123.62341095156611]
Implicit functions represented as deep learning approximations are powerful for reconstructing 3D surfaces.
Such features are essential in building flexible models for both computer graphics and computer vision.
We present methodology that combines detail-rich implicit functions and parametric representations.
arXiv Detail & Related papers (2020-07-22T13:46:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.