LayGA: Layered Gaussian Avatars for Animatable Clothing Transfer
- URL: http://arxiv.org/abs/2405.07319v1
- Date: Sun, 12 May 2024 16:11:28 GMT
- Title: LayGA: Layered Gaussian Avatars for Animatable Clothing Transfer
- Authors: Siyou Lin, Zhe Li, Zhaoqi Su, Zerong Zheng, Hongwen Zhang, Yebin Liu
- Abstract summary: We present Layered Gaussian Avatars (LayGA), a new representation that formulates body and clothing as two separate layers.
Our representation is built upon the Gaussian map-based avatar for its excellent representation power of garment details.
In the single-layer reconstruction stage, we propose a series of geometric constraints to reconstruct smooth surfaces.
In the multi-layer fitting stage, we train two separate models to represent body and clothing and utilize the reconstructed clothing geometries as 3D supervision.
- Score: 40.372917698238204
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Animatable clothing transfer, aiming at dressing and animating garments across characters, is a challenging problem. Most human avatar works entangle the representations of the human body and clothing together, which leads to difficulties for virtual try-on across identities. What's worse, the entangled representations usually fail to exactly track the sliding motion of garments. To overcome these limitations, we present Layered Gaussian Avatars (LayGA), a new representation that formulates body and clothing as two separate layers for photorealistic animatable clothing transfer from multi-view videos. Our representation is built upon the Gaussian map-based avatar for its excellent representation power of garment details. However, the Gaussian map produces unstructured 3D Gaussians distributed around the actual surface. The absence of a smooth explicit surface raises challenges in accurate garment tracking and collision handling between body and garments. Therefore, we propose two-stage training involving single-layer reconstruction and multi-layer fitting. In the single-layer reconstruction stage, we propose a series of geometric constraints to reconstruct smooth surfaces and simultaneously obtain the segmentation between body and clothing. Next, in the multi-layer fitting stage, we train two separate models to represent body and clothing and utilize the reconstructed clothing geometries as 3D supervision for more accurate garment tracking. Furthermore, we propose geometry and rendering layers for both high-quality geometric reconstruction and high-fidelity rendering. Overall, the proposed LayGA realizes photorealistic animations and virtual try-on, and outperforms other baseline methods. Our project page is https://jsnln.github.io/layga/index.html.
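To make the layered idea above concrete, the following is a minimal, hypothetical sketch of the multi-layer fitting stage described in the abstract: two separate sets of 3D Gaussians for the body and the clothing, fitted to placeholder point clouds that stand in for the clothing and body geometries reconstructed in the single-layer stage, plus a simple hinge penalty as a stand-in for body-garment collision handling. This is not the authors' implementation; all names, losses, and dimensions are assumptions.

```python
# Hypothetical sketch of two-layer Gaussian fitting (not the LayGA code).
import torch

class GaussianLayer(torch.nn.Module):
    """One layer (body or clothing) as learnable 3D Gaussian centers, scales, colors."""
    def __init__(self, num_gaussians: int):
        super().__init__()
        self.centers = torch.nn.Parameter(torch.randn(num_gaussians, 3) * 0.1)
        self.log_scales = torch.nn.Parameter(torch.zeros(num_gaussians, 3))  # unused in this toy loss
        self.colors = torch.nn.Parameter(torch.rand(num_gaussians, 3))       # unused in this toy loss

def collision_penalty(cloth_centers, body_centers, body_normals, margin=0.005):
    """Hinge penalty pushing each clothing Gaussian outside its nearest body Gaussian
    along the body normal (a stand-in for proper body-garment collision handling)."""
    d = torch.cdist(cloth_centers, body_centers)        # (Nc, Nb) pairwise distances
    idx = d.argmin(dim=1)                                # nearest body Gaussian per clothing Gaussian
    offset = cloth_centers - body_centers[idx]           # vector from body point to clothing point
    signed = (offset * body_normals[idx]).sum(dim=-1)    # signed distance along the body normal
    return torch.relu(margin - signed).mean()            # penalize penetration below the margin

# Toy multi-layer fitting step: two separate models for body and clothing,
# supervised by placeholder point clouds standing in for the 3D supervision
# obtained from the single-layer reconstruction stage.
body, cloth = GaussianLayer(2048), GaussianLayer(2048)
opt = torch.optim.Adam(list(body.parameters()) + list(cloth.parameters()), lr=1e-3)

body_target = torch.randn(2048, 3) * 0.1          # placeholder reconstructed body surface points
cloth_target = body_target + 0.01                 # placeholder reconstructed clothing surface points
body_normals = torch.nn.functional.normalize(torch.randn(2048, 3), dim=-1)

for step in range(100):
    opt.zero_grad()
    loss_body = torch.cdist(body.centers, body_target).min(dim=1).values.mean()
    loss_cloth = torch.cdist(cloth.centers, cloth_target).min(dim=1).values.mean()
    loss = loss_body + loss_cloth + 0.1 * collision_penalty(cloth.centers, body.centers, body_normals)
    loss.backward()
    opt.step()
```

In the actual method the two layers would also be rendered and supervised with multi-view images; the sketch only illustrates how keeping body and clothing as separate models makes 3D supervision and collision constraints straightforward to apply per layer.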
Related papers
- DAGSM: Disentangled Avatar Generation with GS-enhanced Mesh [102.84518904896737]
DAGSM is a novel pipeline that generates disentangled human bodies and garments from the given text prompts.
We first create the unclothed body, followed by a sequence of individual cloth generation based on the body.
Experiments have demonstrated that DAGSM generates high-quality disentangled avatars, supports clothing replacement and realistic animation, and outperforms the baselines in visual quality.
arXiv Detail & Related papers (2024-11-20T07:00:48Z)
- Towards High-Quality 3D Motion Transfer with Realistic Apparel Animation [69.36162784152584]
We present a novel method aiming for high-quality motion transfer with realistic apparel animation.
We propose a data-driven pipeline that learns to disentangle body and apparel deformations via two neural deformation modules.
Our method produces results with superior quality for various types of apparel.
arXiv Detail & Related papers (2024-07-15T22:17:35Z)
- LAGA: Layered 3D Avatar Generation and Customization via Gaussian Splatting [18.613001290226773]
LAyered Gaussian Avatar (LAGA) is a framework enabling the creation of high-fidelity decomposable avatars with diverse garments.
By decoupling garments from the avatar, our framework empowers users to conveniently edit avatars at the garment level.
Our approach surpasses existing methods in the generation of 3D clothed humans.
arXiv Detail & Related papers (2024-05-21T10:24:06Z)
- ISP: Multi-Layered Garment Draping with Implicit Sewing Patterns [57.176642106425895]
We introduce a garment representation model that addresses limitations of current approaches.
It is faster and yields higher quality reconstructions than purely implicit surface representations.
It supports rapid editing of garment shapes and texture by modifying individual 2D panels.
arXiv Detail & Related papers (2023-05-23T14:23:48Z)
- PERGAMO: Personalized 3D Garments from Monocular Video [6.8338761008826445]
PERGAMO is a data-driven approach to learn a deformable model for 3D garments from monocular images.
We first introduce a novel method to reconstruct the 3D geometry of garments from a single image, and use it to build a dataset of clothing from monocular videos.
We show that our method is capable of producing garment animations that match the real-world behaviour, and generalizes to unseen body motions extracted from a motion capture dataset.
arXiv Detail & Related papers (2022-10-26T21:15:54Z)
- Capturing and Animation of Body and Clothing from Monocular Video [105.87228128022804]
We present SCARF, a hybrid model combining a mesh-based body with a neural radiance field.
Integrating the mesh into the rendering enables us to optimize SCARF directly from monocular videos.
We demonstrate that SCARF reconstructs clothing with higher visual quality than existing methods, that the clothing deforms with changing body pose and body shape, and that clothing can be successfully transferred between avatars of different subjects.
arXiv Detail & Related papers (2022-10-04T19:34:05Z)
- Garment4D: Garment Reconstruction from Point Cloud Sequences [12.86951061306046]
Learning to reconstruct 3D garments is important for dressing 3D human bodies of different shapes in different poses.
Previous works typically rely on 2D images as input, which however suffer from the scale and pose ambiguities.
We propose a principled framework, Garment4D, that uses 3D point cloud sequences of dressed humans for garment reconstruction.
arXiv Detail & Related papers (2021-12-08T08:15:20Z)
- The Power of Points for Modeling Humans in Clothing [60.00557674969284]
Currently, it requires an artist to create 3D human avatars with realistic clothing that can move naturally.
We show that a 3D representation can capture varied topology at high resolution and that it can be learned from data.
We train a neural network with a novel local clothing geometric feature to represent the shape of different outfits.
arXiv Detail & Related papers (2021-09-02T17:58:45Z)
- Explicit Clothing Modeling for an Animatable Full-Body Avatar [21.451440299450592]
We build an animatable clothed body avatar with an explicit representation of the clothing on the upper body from multi-view captured videos.
To learn the interaction between the body dynamics and clothing states, we use a temporal convolution network to predict the clothing latent code (a sketch of this idea follows the list).
We show photorealistic animation output for three different actors, and demonstrate the advantage of our clothed-body avatars over single-layer avatars.
arXiv Detail & Related papers (2021-06-28T17:58:40Z)
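As a brief illustration of the last entry's idea of predicting a clothing latent code from body dynamics with a temporal convolution network, here is a minimal, hypothetical sketch; the pose dimension, latent size, window length, and layer widths are assumptions, not the paper's configuration.

```python
# Hypothetical sketch: temporal convolution over a window of body-pose
# features predicting a clothing latent code (not the paper's code).
import torch

class ClothingLatentTCN(torch.nn.Module):
    def __init__(self, pose_dim=72, latent_dim=128):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Conv1d(pose_dim, 256, kernel_size=3, padding=1),  # temporal conv over frames
            torch.nn.ReLU(),
            torch.nn.Conv1d(256, latent_dim, kernel_size=3, padding=1),
            torch.nn.AdaptiveAvgPool1d(1),                              # pool over the time window
        )

    def forward(self, pose_seq):
        # pose_seq: (batch, window, pose_dim) -> clothing latent code (batch, latent_dim)
        return self.net(pose_seq.transpose(1, 2)).squeeze(-1)

codes = ClothingLatentTCN()(torch.randn(4, 8, 72))   # -> (4, 128)
```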