3D Human Shape Style Transfer
- URL: http://arxiv.org/abs/2109.01587v1
- Date: Fri, 3 Sep 2021 15:51:30 GMT
- Title: 3D Human Shape Style Transfer
- Authors: Joao Regateiro and Edmond Boyer
- Abstract summary: We consider the problem of modifying/replacing the shape style of a real moving character with that of an arbitrary static real source character.
Traditional solutions follow a pose transfer strategy, from the moving character to the source character shape, that relies on skeletal pose parametrization.
In this paper, we explore an alternative approach that transfers the source shape style onto the moving character.
- Score: 21.73251261476412
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the problem of modifying/replacing the shape style of a real
moving character with that of an arbitrary static real source character.
Traditional solutions follow a pose transfer strategy, from the moving
character to the source character shape, that relies on skeletal pose
parametrization. In this paper, we explore an alternative approach that
transfers the source shape style onto the moving character. The expected
benefit is to avoid the inherently difficult pose-to-shape conversion required
with skeletal parametrization applied to real characters. To this end, we
consider image style transfer techniques and investigate how to adapt them to
3D human shapes. Adaptive Instance Normalisation (AdaIN) and SPADE
architectures have been demonstrated to efficiently and accurately transfer the
style of an image onto another while preserving the original image structure.
AdaIN contributes a module that performs style transfer through the feature
statistics of the subjects, while SPADE contributes a residual block
architecture that refines the quality of the style transfer. We demonstrate that
these approaches are extendable to the 3D shape domain by proposing a
convolutional neural network that applies the same principle of preserving the
shape structure (shape pose) while transferring the style of a new subject
shape. The generated results are supervised through a discriminator module
that evaluates the realism of the shape, encouraging the decoder to synthesise
plausible shapes and improving the style transfer for unseen subjects. Our
experiments demonstrate an average of $\approx 56\%$ qualitative and
quantitative improvement over optimization-based and learning-based baseline
methods for shape transfer.
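For intuition, AdaIN transfers style by matching feature statistics: $\mathrm{AdaIN}(x, y) = \sigma(y)\,\frac{x - \mu(x)}{\sigma(x)} + \mu(y)$, where $\mu$ and $\sigma$ are computed per channel. Below is a minimal sketch of this statistic-matching step applied to per-vertex shape features; the PyTorch framework, the (batch, channels, vertices) tensor layout, and the `eps` constant are illustrative assumptions, not details taken from the paper.

```python
import torch

def adain(content_feat: torch.Tensor, style_feat: torch.Tensor,
          eps: float = 1e-5) -> torch.Tensor:
    """Adaptive Instance Normalisation over per-vertex shape features.

    Assumed layout: (batch, channels, num_vertices), by analogy with the
    (batch, channels, height * width) layout used for images.
    """
    # Channel-wise statistics of the pose-carrying (content) features.
    c_mean = content_feat.mean(dim=2, keepdim=True)
    c_std = content_feat.std(dim=2, keepdim=True) + eps
    # Channel-wise statistics of the source-shape (style) features.
    s_mean = style_feat.mean(dim=2, keepdim=True)
    s_std = style_feat.std(dim=2, keepdim=True) + eps
    # Strip the content statistics, then impose the style statistics;
    # the per-vertex arrangement (i.e. the pose) is left untouched.
    return s_std * (content_feat - c_mean) / c_std + s_mean
```

Only first- and second-order statistics are exchanged, which is why the structure of the content features (here, the pose) survives the transfer; SPADE-style residual blocks go further by predicting per-location modulation parameters to refine the output. For the discriminator supervision the abstract mentions, a standard non-saturating adversarial term for the decoder could be sketched as follows, where `disc` is a hypothetical realism critic over generated shapes, not the paper's actual module:

```python
import torch
import torch.nn.functional as F

def decoder_adversarial_loss(disc: torch.nn.Module,
                             fake_shape: torch.Tensor) -> torch.Tensor:
    # The decoder is rewarded when the critic scores its output as real,
    # pushing it towards plausible shapes for unseen subjects.
    logits = disc(fake_shape)
    return F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
```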
Related papers
- StyleDyRF: Zero-shot 4D Style Transfer for Dynamic Neural Radiance Fields [21.55426133036809]
Existing efforts on 3D style transfer can effectively combine the visual features of style images and neural radiance fields (NeRF).
We introduce StyleDyRF, a method that represents the 4D feature space by deforming a canonical feature volume.
We show that our method not only renders 4D photorealistic style transfer results in a zero-shot manner but also outperforms existing methods in terms of visual quality and consistency.
arXiv Detail & Related papers (2024-03-13T07:42:21Z)
- Zero-shot Pose Transfer for Unrigged Stylized 3D Characters [87.39039511208092]
We present a zero-shot approach that requires only the widely available deformed non-stylized avatars in training.
We leverage the power of local deformation, but without requiring explicit correspondence labels.
Our model generalizes to categories with scarce annotation, such as stylized quadrupeds.
arXiv Detail & Related papers (2023-05-31T21:39:02Z)
- A Unified Arbitrary Style Transfer Framework via Adaptive Contrastive Learning [84.8813842101747]
Unified Contrastive Arbitrary Style Transfer (UCAST) is a novel style representation learning and transfer framework.
We present an adaptive contrastive learning scheme for style transfer by introducing an input-dependent temperature.
Our framework consists of three key components, i.e., a parallel contrastive learning scheme for style representation and style transfer, a domain enhancement module for effective learning of style distribution, and a generative network for style transfer.
arXiv Detail & Related papers (2023-03-09T04:35:00Z)
- Skeleton-free Pose Transfer for Stylized 3D Characters [53.33996932633865]
We present the first method that automatically transfers poses between stylized 3D characters without skeletal rigging.
We propose a novel pose transfer network that predicts the character skinning weights and deformation transformations jointly to articulate the target character to match the desired pose.
Our method is trained in a semi-supervised manner absorbing all existing character data with paired/unpaired poses and stylized shapes.
arXiv Detail & Related papers (2022-07-28T20:05:57Z)
- Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning [84.8813842101747]
Contrastive Arbitrary Style Transfer (CAST) is a new style representation learning and style transfer method via contrastive learning.
Our framework consists of three key components, i.e., a multi-layer style projector for style code encoding, a domain enhancement module for effective learning of style distribution, and a generative network for image style transfer.
arXiv Detail & Related papers (2022-05-19T13:11:24Z)
- Controllable Person Image Synthesis with Spatially-Adaptive Warped Normalization [72.65828901909708]
Controllable person image generation aims to produce realistic human images with desirable attributes.
We introduce a novel Spatially-Adaptive Warped Normalization (SAWN), which integrates a learned flow-field to warp modulation parameters.
We propose a novel self-training part replacement strategy to refine the pretrained model for the texture-transfer task.
arXiv Detail & Related papers (2021-05-31T07:07:44Z)
- 3DSNet: Unsupervised Shape-to-Shape 3D Style Transfer [66.48720190245616]
We propose a learning-based approach for style transfer between 3D objects.
The proposed method can synthesize new 3D shapes both in the form of point clouds and meshes.
We extend our technique to implicitly learn the multimodal style distribution of the chosen domains.
arXiv Detail & Related papers (2020-11-26T16:59:12Z)