ASM: Adaptive Skinning Model for High-Quality 3D Face Modeling
- URL: http://arxiv.org/abs/2304.09423v3
- Date: Sun, 8 Oct 2023 09:16:28 GMT
- Title: ASM: Adaptive Skinning Model for High-Quality 3D Face Modeling
- Authors: Kai Yang, Hong Shang, Tianyang Shi, Xinghan Chen, Jingkai Zhou,
Zhongqian Sun and Wei Yang
- Abstract summary: We argue that reconstruction from uncalibrated multi-view images demands a new model with stronger capacity.
We propose the Adaptive Skinning Model (ASM), which redefines the skinning model with more compact and fully tunable parameters.
Our work opens up a new research direction for parametric face models and facilitates future research on multi-view reconstruction.
- Score: 11.885382595302751
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The research fields of parametric face models and 3D face reconstruction have
been extensively studied. However, a critical question remains unanswered: how
to tailor the face model to a specific reconstruction setting. We argue that
reconstruction from uncalibrated multi-view images demands a new model with
stronger capacity. Our study shifts attention from data-dependent 3D Morphable
Models (3DMM) to an understudied, human-designed skinning model. We propose the
Adaptive Skinning Model (ASM), which redefines the skinning model with more
compact and fully tunable parameters. With extensive experiments, we
demonstrate that ASM achieves significantly higher capacity than 3DMM, with
the additional advantages of a smaller model size and easy implementation for
new mesh topologies. With ASM, we achieve state-of-the-art performance for
multi-view reconstruction on the Florence MICC Coop benchmark. Our quantitative
analysis demonstrates the importance of a high-capacity model for fully
exploiting the abundant information in multi-view input. Furthermore, the
model's physically semantic parameters can be used directly in real-world
applications, such as in-game avatar creation. As a result, our work opens up
a new research direction for parametric face models and facilitates future
research on multi-view reconstruction.
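The abstract does not spell out ASM's parameterization, but the skinning family it redefines has a standard form. As a reference point only, here is a minimal NumPy sketch of classic linear blend skinning, where the per-vertex weights and per-bone transforms are exactly the kind of quantities an adaptive model would expose as tunable parameters; all names and shapes are illustrative assumptions, not ASM's implementation.

    # Minimal linear blend skinning (LBS) sketch -- the classic formulation
    # that skinning models such as ASM build on. ASM's actual compact
    # parameterization is not reproduced here.
    import numpy as np

    def linear_blend_skinning(rest_verts, weights, rotations, translations):
        """Deform rest-pose vertices by a weighted blend of bone transforms.

        rest_verts:   (V, 3) rest-pose vertex positions
        weights:      (V, B) per-vertex skinning weights (rows sum to 1);
                      in an adaptive model these would be free parameters
        rotations:    (B, 3, 3) per-bone rotation matrices
        translations: (B, 3)   per-bone translations
        """
        # Apply every bone transform to every vertex: (B, V, 3)
        posed = np.einsum('bij,vj->bvi', rotations, rest_verts) + translations[:, None, :]
        # Blend the per-bone results with the skinning weights: (V, 3)
        return np.einsum('vb,bvi->vi', weights, posed)

    # Toy check: identity transforms leave the mesh unchanged.
    V, B = 5, 2
    verts = np.random.rand(V, 3)
    w = np.full((V, B), 1.0 / B)
    R, t = np.stack([np.eye(3)] * B), np.zeros((B, 3))
    assert np.allclose(linear_blend_skinning(verts, w, R, t), verts)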
Related papers
- SMILE: Zero-Shot Sparse Mixture of Low-Rank Experts Construction From Pre-Trained Foundation Models [85.67096251281191]
We present an innovative approach to model fusion called zero-shot Sparse MIxture of Low-rank Experts (SMILE) construction.
SMILE allows for the upscaling of source models into an MoE model without extra data or further training.
We conduct extensive experiments across diverse scenarios, such as image classification and text generation tasks, using full fine-tuning and LoRA fine-tuning.
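As a rough, hedged illustration of the idea summarized above (not SMILE's actual algorithm): compress each fine-tuned model's weight delta into a low-rank expert via truncated SVD, then apply the experts as gated corrections to the shared base layer. The rank, the gating scores, and every name below are assumptions.

    # Sketch: zero-shot low-rank experts from fine-tuned weight deltas.
    import numpy as np

    rng = np.random.default_rng(0)

    def low_rank_expert(w_finetuned, w_base, rank=4):
        """Compress a fine-tuned weight delta into low-rank factors (A, B)."""
        U, S, Vt = np.linalg.svd(w_finetuned - w_base, full_matrices=False)
        return U[:, :rank] * S[:rank], Vt[:rank, :]   # (out, r), (r, in)

    def moe_forward(x, w_base, experts, gate_scores):
        """Base output plus a gated sum of low-rank expert corrections."""
        y = x @ w_base.T
        for (A, B), g in zip(experts, gate_scores):
            y = y + g * (x @ B.T @ A.T)               # cheap low-rank update
        return y

    # Toy usage: two fine-tuned layers become two rank-4 experts.
    out_dim, in_dim = 8, 6
    w_base = rng.random((out_dim, in_dim))
    experts = [low_rank_expert(w_base + 0.1 * rng.random((out_dim, in_dim)), w_base)
               for _ in range(2)]
    y = moe_forward(rng.random((1, in_dim)), w_base, experts, gate_scores=[0.7, 0.3])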
arXiv Detail & Related papers (2024-08-19T17:32:15Z)
- EMR-Merging: Tuning-Free High-Performance Model Merging [55.03509900949149]
We show that Elect, Mask & Rescale-Merging (EMR-Merging) achieves outstanding performance compared to existing merging methods.
EMR-Merging is tuning-free, requiring no data or additional training, while still delivering impressive performance.
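A heavily hedged sketch of an elect/mask/rescale-style merge over task vectors (fine-tuned weights minus a shared base). The specific election and rescaling rules below are reconstructions from the method's name and the summary above, not EMR-Merging verbatim.

    # Sketch: elect a unified direction, mask disagreeing entries per task,
    # and rescale so each task keeps its original magnitude.
    import numpy as np

    def emr_style_merge(task_vectors):
        tv = np.stack(task_vectors)                 # (T, D) parameter deltas
        elected_sign = np.sign(tv.sum(axis=0))      # Elect: majority direction
        agree = np.sign(tv) == elected_sign         # (T, D) sign agreement
        unified = elected_sign * np.where(agree, np.abs(tv), 0).max(axis=0)
        scales = [np.abs(t)[m].sum() / max(np.abs(unified)[m].sum(), 1e-12)
                  for t, m in zip(tv, agree)]       # Rescale: per-task energy
        return unified, agree, scales               # task i uses scales[i] * agree[i] * unified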
arXiv Detail & Related papers (2024-05-23T05:25:45Z)
- PGAHum: Prior-Guided Geometry and Appearance Learning for High-Fidelity Animatable Human Reconstruction [9.231326291897817]
We introduce PGAHum, a prior-guided geometry and appearance learning framework for high-fidelity animatable human reconstruction.
We thoroughly exploit 3D human priors in three key modules of PGAHum to achieve high-quality geometry reconstruction with intricate details and photorealistic view synthesis on unseen poses.
arXiv Detail & Related papers (2024-04-22T04:22:30Z)
- VRMM: A Volumetric Relightable Morphable Head Model [55.21098471673929]
We introduce the Volumetric Relightable Morphable Model (VRMM), a novel volumetric and parametric facial prior for 3D face modeling.
Our framework efficiently disentangles and encodes latent spaces of identity, expression, and lighting into low-dimensional representations.
We demonstrate the versatility and effectiveness of VRMM through various applications like avatar generation, facial reconstruction, and animation.
arXiv Detail & Related papers (2024-02-06T15:55:46Z)
- ZhiJian: A Unifying and Rapidly Deployable Toolbox for Pre-trained Model Reuse [59.500060790983994]
This paper introduces ZhiJian, a comprehensive and user-friendly toolbox for model reuse, utilizing the PyTorch backend.
ZhiJian presents a novel paradigm that unifies diverse perspectives on model reuse, encompassing target architecture construction with PTMs, tuning of the target model with PTMs, and PTM-based inference.
arXiv Detail & Related papers (2023-08-17T19:12:13Z)
- Probabilistic Modeling for Human Mesh Recovery [73.11532990173441]
This paper focuses on the problem of 3D human reconstruction from 2D evidence.
We recast the problem as learning a mapping from the input to a distribution of plausible 3D poses.
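A minimal sketch of mapping an input to a distribution over plausible poses rather than to a single estimate. The diagonal-Gaussian head below is a simplified stand-in for the learned conditional density in the paper; all shapes and names are assumptions.

    # Sketch: image features -> (mean, scale) over pose parameters,
    # from which multiple pose hypotheses can be sampled.
    import numpy as np

    rng = np.random.default_rng(0)

    def pose_distribution_head(features, w_mu, w_logvar):
        """Map image features to a diagonal Gaussian over pose parameters."""
        mu = features @ w_mu                         # (P,) predicted mean pose
        sigma = np.exp(0.5 * (features @ w_logvar))  # (P,) predicted std-dev
        return mu, sigma

    def sample_poses(mu, sigma, n=10):
        """Draw n plausible pose hypotheses instead of one point estimate."""
        return mu + sigma * rng.standard_normal((n, mu.shape[-1]))

    # Toy usage: 32-D feature -> 24-D pose vector, five hypotheses.
    feat = rng.random(32)
    w_mu, w_logvar = rng.random((32, 24)), 0.01 * rng.random((32, 24))
    hypotheses = sample_poses(*pose_distribution_head(feat, w_mu, w_logvar), n=5)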
arXiv Detail & Related papers (2021-08-26T17:55:11Z)
- imGHUM: Implicit Generative Models of 3D Human Shape and Articulated Pose [42.4185273307021]
We present imGHUM, the first holistic generative model of 3D human shape and articulated pose.
We model the full human body implicitly, as the zero-level-set of a function, without the use of an explicit template mesh.
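A toy sketch of the zero-level-set idea: the surface is the set of points where an implicit function evaluates to zero (negative inside, positive outside), with no explicit template mesh. imGHUM conditions such a function on shape and pose codes; the hand-written sphere below is only a stand-in.

    # Sketch: a shape as the zero-level-set of a signed distance function.
    import numpy as np

    def sdf_sphere(points, center=np.zeros(3), radius=1.0):
        """Signed distance to a sphere: exactly zero on the surface."""
        return np.linalg.norm(points - center, axis=-1) - radius

    pts = np.array([[0.0, 0.0, 0.0],   # inside  -> negative
                    [1.0, 0.0, 0.0],   # on the zero-level-set -> 0
                    [2.0, 0.0, 0.0]])  # outside -> positive
    print(sdf_sphere(pts))             # [-1.  0.  1.]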
arXiv Detail & Related papers (2021-08-24T17:08:28Z)
- PaMIR: Parametric Model-Conditioned Implicit Representation for Image-based Human Reconstruction [67.08350202974434]
We propose Parametric Model-Conditioned Implicit Representation (PaMIR), which combines the parametric body model with the free-form deep implicit function.
We show that our method achieves state-of-the-art performance for image-based 3D human reconstruction in the cases of challenging poses and clothing types.
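A minimal sketch of combining a parametric body prior with a free-form implicit function: the occupancy prediction for a 3D point is conditioned on both an image feature and a feature derived from the fitted parametric body, so the body prior regularizes the free-form surface. The tiny random-weight MLP and all shapes are illustrative assumptions, not PaMIR's architecture.

    # Sketch: occupancy of a 3D point from fused image + body-model features.
    import numpy as np

    rng = np.random.default_rng(0)

    def occupancy(point, image_feat, body_feat, w1, w2):
        """Predict an inside/outside probability for one 3D point."""
        x = np.concatenate([point, image_feat, body_feat])  # fused conditioning
        h = np.tanh(w1 @ x)                                 # hidden layer
        return 1.0 / (1.0 + np.exp(-(w2 @ h)))              # sigmoid -> occupancy

    # Toy usage with random weights; a real model would be trained.
    point, img_f, body_f = rng.random(3), rng.random(8), rng.random(8)
    w1, w2 = rng.random((16, 19)), rng.random(16)
    print(occupancy(point, img_f, body_f, w1, w2))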
arXiv Detail & Related papers (2020-07-08T02:26:19Z)