UNIF: United Neural Implicit Functions for Clothed Human Reconstruction and Animation
- URL: http://arxiv.org/abs/2207.09835v1
- Date: Wed, 20 Jul 2022 11:41:29 GMT
- Title: UNIF: United Neural Implicit Functions for Clothed Human Reconstruction and Animation
- Authors: Shenhan Qian, Jiale Xu, Ziwei Liu, Liqian Ma, Shenghua Gao
- Abstract summary: We propose a part-based method for clothed human reconstruction and animation with raw scans and skeletons as the input.
Our method learns to separate parts from body motions rather than from part supervision, and can therefore be extended to clothed humans and other articulated objects.
- Score: 53.2018423391591
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose united implicit functions (UNIF), a part-based method for clothed human reconstruction and animation that takes raw scans and skeletons as input. Previous part-based methods for human reconstruction rely on ground-truth part labels from SMPL and are therefore limited to minimally clothed humans. In contrast, our method learns to separate parts from body motions rather than from part supervision, and can thus be extended to clothed humans and other articulated objects. Our Partition-from-Motion is achieved by a bone-centered initialization, a bone limit loss, and a section normal loss, which together ensure stable part division even when the training poses are limited. We also present a minimal perimeter loss for the SDF that suppresses spurious surfaces and part overlap. Another core component of our method is an adjacent-part seaming algorithm that produces non-rigid deformations to maintain the connections between parts, which significantly reduces part-based artifacts. Building on this algorithm, we further propose "Competing Parts", a scheme that defines blending weights by the relative position of a point to the bones rather than its absolute position, avoiding the generalization problem of neural implicit functions with inverse LBS (linear blend skinning). We demonstrate the effectiveness of our method on clothed human body reconstruction and animation on the CAPE and ClothSeq datasets.
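For readers unfamiliar with the part-based implicit representation described in the abstract, the following minimal Python/NumPy sketch illustrates two of the underlying ideas in heavily simplified form: composing a shape as the union (minimum) of per-part SDFs, and deriving soft blending weights from a query point's distance to each bone, so that nearby bones "compete" based on relative rather than absolute position. The function names, the softmax-style weighting, and the sphere parts are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only; not the UNIF authors' code.
import numpy as np

def point_to_segment_distance(p, a, b):
    """Distance from point p to the bone segment with endpoints a and b."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def relative_blend_weights(p, bones, temperature=0.05):
    """Soft weights from a point's position relative to each bone.
    `bones` is a list of (head, tail) joint positions; closer bones get
    larger weights via a softmax over negative distances (an assumption
    made here for illustration)."""
    d = np.array([point_to_segment_distance(p, a, b) for a, b in bones])
    logits = -d / temperature
    w = np.exp(logits - logits.max())
    return w / w.sum()

def united_sdf(p, part_sdfs):
    """Union of per-part signed distance functions: take the minimum value."""
    return min(sdf(p) for sdf in part_sdfs)

# Toy usage: a two-bone skeleton and two sphere "parts".
bones = [(np.array([0.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])),
         (np.array([0.0, 1.0, 0.0]), np.array([0.0, 2.0, 0.0]))]
part_sdfs = [lambda p: np.linalg.norm(p - np.array([0.0, 0.5, 0.0])) - 0.4,
             lambda p: np.linalg.norm(p - np.array([0.0, 1.5, 0.0])) - 0.4]
query = np.array([0.1, 0.9, 0.0])
print(relative_blend_weights(query, bones), united_sdf(query, part_sdfs))
```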
Related papers
- Divide and Fuse: Body Part Mesh Recovery from Partially Visible Human Images [57.479339658504685]
"Divide and Fuse" strategy reconstructs human body parts independently before fusing them.
Human Part Parametric Models (HPPM) independently reconstruct the mesh from a few shape and global-location parameters.
A specially designed fusion module seamlessly integrates the reconstructed parts, even when only a few are visible.
arXiv Detail & Related papers (2024-07-12T21:29:11Z) - AiOS: All-in-One-Stage Expressive Human Pose and Shape Estimation [55.179287851188036]
We introduce a novel all-in-one-stage framework, AiOS, for expressive human pose and shape recovery without an additional human detection step.
We first employ a human token to probe a human location in the image and encode global features for each instance.
Then, we introduce a joint-related token to probe human joints in the image and encode fine-grained local features.
arXiv Detail & Related papers (2024-03-26T17:59:23Z) - Reconstructing 3D Human Pose from RGB-D Data with Occlusions [11.677978425905096]
We propose a new method to reconstruct the 3D human body from RGB-D images with occlusions.
To reconstruct a semantically and physically plausible human body, we propose to reduce the solution space based on scene information and prior knowledge.
We conducted experiments on the PROX dataset, and the results demonstrate that our method produces more accurate and plausible results compared with other methods.
arXiv Detail & Related papers (2023-10-02T14:16:13Z) - Body Part-Based Representation Learning for Occluded Person
Re-Identification [102.27216744301356]
Occluded person re-identification (ReID) is a person retrieval task which aims at matching occluded person images with holistic ones.
Part-based methods have been shown beneficial as they offer fine-grained information and are well suited to represent partially visible human bodies.
We propose BPBreID, a body part-based ReID model that addresses these challenges.
arXiv Detail & Related papers (2022-11-07T16:48:41Z) - Unsupervised Pose-Aware Part Decomposition for 3D Articulated Objects [68.73163598790255]
We propose PPD (unsupervised Pose-aware Part Decomposition) to address a novel setting that explicitly targets man-made articulated objects with mechanical joints.
We show that category-common prior learning for both part shapes and poses facilitates the unsupervised learning of (1) part decomposition with non-primitive-based implicit representation, and (2) part pose as joint parameters under single-frame shape supervision.
arXiv Detail & Related papers (2021-10-08T23:53:56Z) - Monocular Human Pose and Shape Reconstruction using Part Differentiable
Rendering [53.16864661460889]
Recent works have succeeded with regression-based methods that estimate parametric models directly through a deep neural network supervised by 3D ground truth.
In this paper, we introduce body segmentation as critical supervision.
To improve reconstruction with part segmentation, we propose a part-level differentiable renderer that enables part-based models to be supervised by part segmentation.
arXiv Detail & Related papers (2020-03-24T14:25:46Z) - AttentionAnatomy: A unified framework for whole-body organs at risk
segmentation using multiple partially annotated datasets [30.23917416966188]
Organs-at-risk (OAR) delineation in computed tomography (CT) is an important step in Radiation Therapy (RT) planning.
Our proposed end-to-end convolutional neural network model, called AttentionAnatomy, can be jointly trained with three partially annotated datasets.
Experimental results of our proposed framework show significant improvements in both the Sørensen-Dice coefficient (DSC) and the 95% Hausdorff distance.
arXiv Detail & Related papers (2020-01-13T18:31:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.