AniPixel: Towards Animatable Pixel-Aligned Human Avatar
- URL: http://arxiv.org/abs/2302.03397v2
- Date: Tue, 17 Oct 2023 16:29:12 GMT
- Title: AniPixel: Towards Animatable Pixel-Aligned Human Avatar
- Authors: Jinlong Fan and Jing Zhang and Zhi Hou and Dacheng Tao
- Abstract summary: AniPixel is a novel animatable and generalizable human avatar reconstruction method.
We propose a neural skinning field based on skeleton-driven deformation to establish the target-to-canonical and canonical-to-observation correspondences.
Experiments show that AniPixel renders comparable novel views while delivering better novel pose animation results than state-of-the-art methods.
- Score: 65.7175527782209
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although human reconstruction typically results in human-specific avatars,
recent 3D scene reconstruction techniques utilizing pixel-aligned features show
promise in generalizing to new scenes. Applying these techniques to human
avatar reconstruction can yield a volumetric avatar that generalizes but has
limited animatability, since rendering is only possible for static
representations. In this paper, we propose AniPixel, a novel animatable and
generalizable human avatar reconstruction method that leverages pixel-aligned
features for body geometry prediction and RGB color blending. Technically, to
align the canonical space with the target space and the observation space, we
propose a bidirectional neural skinning field based on skeleton-driven
deformation to establish the target-to-canonical and canonical-to-observation
correspondences. Then, we disentangle the canonical body geometry into a
normalized neutral-sized body and a subject-specific residual for better
generalizability. As the geometry and appearance are closely related, we
introduce pixel-aligned features to facilitate the body geometry prediction and
detailed surface normals to reinforce the RGB color blending. We also devise a
pose-dependent and view direction-related shading module to represent the local
illumination variance. Experiments show that AniPixel renders comparable novel
views while delivering better novel pose animation results than
state-of-the-art methods.
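Two of the building blocks named in the abstract, skeleton-driven skinning and pixel-aligned feature sampling, are sketched below in plain NumPy. This is an illustrative sketch only, not the authors' implementation: the rig, tensor shapes, and function names are assumptions, and the camera points are assumed to already be in the camera frame.
```python
# Illustrative sketch (not AniPixel's code): linear blend skinning to map
# canonical points into a posed target space, and bilinear sampling of a
# pixel-aligned feature map at the 2D projection of each 3D point.
import numpy as np

def linear_blend_skinning(x_canonical, skinning_weights, bone_transforms):
    """x_canonical: (N, 3) canonical points; skinning_weights: (N, J), rows sum to 1;
    bone_transforms: (J, 4, 4) rigid bone transforms. Returns (N, 3) posed points."""
    x_h = np.concatenate([x_canonical, np.ones((len(x_canonical), 1))], axis=1)
    blended = np.einsum('nj,jab->nab', skinning_weights, bone_transforms)  # per-point 4x4
    return np.einsum('nab,nb->na', blended, x_h)[:, :3]

def sample_pixel_aligned_feature(feature_map, K, x_camera):
    """feature_map: (H, W, C) image-aligned features; K: (3, 3) intrinsics;
    x_camera: (N, 3) points in the camera frame. Returns (N, C) sampled features."""
    uvw = (K @ x_camera.T).T
    uv = uvw[:, :2] / uvw[:, 2:3]                      # perspective projection to pixels
    H, W, _ = feature_map.shape
    u = np.clip(uv[:, 0], 0.0, W - 1.001)
    v = np.clip(uv[:, 1], 0.0, H - 1.001)
    u0, v0 = np.floor(u).astype(int), np.floor(v).astype(int)
    du, dv = (u - u0)[:, None], (v - v0)[:, None]
    return (feature_map[v0, u0] * (1 - du) * (1 - dv)
            + feature_map[v0, u0 + 1] * du * (1 - dv)
            + feature_map[v0 + 1, u0] * (1 - du) * dv
            + feature_map[v0 + 1, u0 + 1] * du * dv)
```
In the full method the sampled features would condition the networks that predict the neutral-sized body geometry, the subject-specific residual, and the blended RGB color; here they are simply returned.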
Related papers
- UVA: Towards Unified Volumetric Avatar for View Synthesis, Pose
rendering, Geometry and Texture Editing [83.0396740127043]
We propose a new approach named Unified Volumetric Avatar (UVA) that enables local editing of both geometry and texture.
UVA transforms each observation point to a canonical space using a skinning motion field and represents geometry and texture in separate neural fields.
Experiments on multiple human avatars demonstrate that our UVA achieves novel view synthesis and novel pose rendering.
arXiv Detail & Related papers (2023-04-14T07:39:49Z)
- One-shot Implicit Animatable Avatars with Model-based Priors [31.385051428938585]
ELICIT is a novel method for learning human-specific neural radiance fields from a single image.
ELICIT has outperformed strong baseline methods of avatar creation when only a single image is available.
arXiv Detail & Related papers (2022-12-05T18:24:06Z)
- ARAH: Animatable Volume Rendering of Articulated Human SDFs [37.48271522183636]
We propose a model to create animatable clothed human avatars with detailed geometry that generalize well to out-of-distribution poses.
Our algorithm enables efficient point sampling and accurate point canonicalization while generalizing well to unseen poses.
Our method achieves state-of-the-art performance on geometry and appearance reconstruction while creating animatable avatars.
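Point canonicalization generally means finding the canonical point whose forward-skinned position lands on an observed point; ARAH solves this with joint root-finding together with the ray intersection. The snippet below is only a simplified, generic illustration of inverting a forward deformation by damped fixed-point iteration, with a toy rigid translation standing in for learned skinning; it is not ARAH's algorithm.
```python
# Simplified illustration: canonicalize an observed point by iteratively
# solving forward_deform(x_canonical) = x_observed. The forward map,
# initialization, and tolerances are toy assumptions, not ARAH's method.
import numpy as np

def forward_deform(x_canonical, pose_offset):
    """Toy forward map: a rigid translation standing in for learned skinning."""
    return x_canonical + pose_offset

def canonicalize(x_observed, pose_offset, iters=100, step=0.5, tol=1e-8):
    x_canonical = x_observed.copy()                    # initialize in observed space
    for _ in range(iters):
        residual = forward_deform(x_canonical, pose_offset) - x_observed
        if np.linalg.norm(residual) < tol:
            break
        x_canonical -= step * residual                 # damped fixed-point update
    return x_canonical

x_obs = np.array([0.3, 1.2, -0.1])
offset = np.array([0.0, 0.4, 0.0])
print(canonicalize(x_obs, offset))                     # approximately x_obs - offset
```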
arXiv Detail & Related papers (2022-10-18T17:56:59Z)
- Generalizable Neural Performer: Learning Robust Radiance Fields for
Human Novel View Synthesis [52.720314035084215]
This work aims to synthesize free-viewpoint images of arbitrary human performers using a general deep learning framework.
We present a simple yet powerful framework, named Generalizable Neural Performer (GNR), that learns a generalizable and robust neural body representation.
Experiments on GeneBody-1.0 and ZJU-Mocap show that our method is more robust than recent state-of-the-art generalizable methods.
arXiv Detail & Related papers (2022-04-25T17:14:22Z)
- PINA: Learning a Personalized Implicit Neural Avatar from a Single RGB-D
Video Sequence [60.46092534331516]
We present a novel method to learn Personalized Implicit Neural Avatars (PINA) from a short RGB-D sequence.
PINA does not require complete scans, nor does it require a prior learned from large datasets of clothed humans.
We propose a method to learn the shape and non-rigid deformations via a pose-conditioned implicit surface and a deformation field.
arXiv Detail & Related papers (2022-03-03T15:04:55Z)
- I M Avatar: Implicit Morphable Head Avatars from Videos [68.13409777995392]
We propose IMavatar, a novel method for learning implicit head avatars from monocular videos.
Inspired by the fine-grained control mechanisms afforded by conventional 3DMMs, we represent the expression- and pose-related deformations via learned blendshapes and skinning fields.
We show quantitatively and qualitatively that our method improves geometry and covers a more complete expression space compared to state-of-the-art methods.
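The blendshape part of that recipe is a coefficient-weighted sum of basis offsets added to a canonical template before skinning. Below is a minimal NumPy sketch with random, illustrative bases and coefficients; IMavatar instead learns these as continuous implicit fields.
```python
# Minimal sketch of blendshape-style deformation: expression- and pose-weighted
# basis offsets added to a canonical template before skinning is applied.
# Shapes and coefficients are illustrative; IMavatar learns them as implicit
# fields over continuous 3D points rather than fixed mesh vertices.
import numpy as np

def apply_blendshapes(template, expr_basis, expr_coeffs, pose_basis, pose_coeffs):
    """template: (V, 3); expr_basis: (E, V, 3) with coeffs (E,);
    pose_basis: (P, V, 3) with coeffs (P,). Returns deformed (V, 3) vertices."""
    expr_offset = np.einsum('e,evc->vc', expr_coeffs, expr_basis)
    pose_offset = np.einsum('p,pvc->vc', pose_coeffs, pose_basis)
    return template + expr_offset + pose_offset

V, E, P = 100, 10, 4
deformed = apply_blendshapes(
    template=np.random.randn(V, 3),
    expr_basis=0.01 * np.random.randn(E, V, 3), expr_coeffs=np.random.randn(E),
    pose_basis=0.01 * np.random.randn(P, V, 3), pose_coeffs=np.random.randn(P),
)
```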
arXiv Detail & Related papers (2021-12-14T15:30:32Z)
- Animatable Neural Radiance Fields from Monocular RGB Video [72.6101766407013]
We present animatable neural radiance fields for detailed human avatar creation from monocular videos.
Our approach extends neural radiance fields to dynamic scenes with human movements by introducing explicit pose-guided deformation.
In experiments we show that the proposed approach achieves 1) implicit human geometry and appearance reconstruction with high-quality details, 2) photo-realistic rendering of the human from arbitrary views, and 3) animation of the human with arbitrary poses.
arXiv Detail & Related papers (2021-06-25T13:32:23Z)
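Conceptually, such pose-guided deformation warps each ray sample back to a canonical space before the radiance field is queried, and standard volume rendering proceeds on the warped samples. The sketch below shows that pattern with a toy rigid warp and an analytic density in place of the learned networks; nothing in it is taken from the paper's implementation.
```python
# Minimal sketch: volume rendering where each sample is warped to a canonical
# space by a pose-guided deformation before the radiance field is queried.
# The rigid warp and analytic "field" below are toy stand-ins, not learned networks.
import numpy as np

def pose_guided_warp(x, joint_rotation, joint_translation):
    """Toy deformation: one rigid transform standing in for skeletal skinning."""
    return (joint_rotation @ (x - joint_translation).T).T

def canonical_radiance_field(x_canonical):
    """Toy field: density of a unit sphere, constant gray color."""
    density = np.clip(1.0 - np.linalg.norm(x_canonical, axis=-1), 0.0, None) * 5.0
    color = np.full(x_canonical.shape, 0.7)
    return density, color

def render_ray(origin, direction, joint_rotation, joint_translation,
               near=0.0, far=4.0, n_samples=64):
    """Quadrature volume rendering along one ray through the deformed scene."""
    t = np.linspace(near, far, n_samples)
    delta = t[1] - t[0]
    x_observed = origin + t[:, None] * direction               # samples in posed space
    x_canonical = pose_guided_warp(x_observed, joint_rotation, joint_translation)
    density, color = canonical_radiance_field(x_canonical)
    alpha = 1.0 - np.exp(-density * delta)
    transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = transmittance * alpha
    return (weights[:, None] * color).sum(axis=0)               # composited RGB

rgb = render_ray(origin=np.array([0.0, 0.0, -3.0]),
                 direction=np.array([0.0, 0.0, 1.0]),
                 joint_rotation=np.eye(3),
                 joint_translation=np.array([0.2, 0.0, 0.0]))
```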
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.