Recapture as You Want
- URL: http://arxiv.org/abs/2006.01435v1
- Date: Tue, 2 Jun 2020 07:43:53 GMT
- Title: Recapture as You Want
- Authors: Chen Gao, Si Liu, Ran He, Shuicheng Yan, Bo Li
- Abstract summary: We present a portrait recapture method enabling users to easily edit their portrait to desired posture/view, body figure and clothing style.
We decompose the editing procedure into semantic-aware geometric and appearance transformation.
In appearance transformation, we design two novel modules: Semantic-aware Attentive Transfer (SAT) and Layout Graph Reasoning (LGR).
- Score: 140.6691726604726
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the increasing prevalence of mobile devices and their ever
more powerful camera systems, people can conveniently take photos in their
daily life, which naturally creates demand for more intelligent photo
post-processing techniques, especially for portrait photos. In this paper, we present a
portrait recapture method enabling users to easily edit their portrait to
a desired posture/view, body figure, and clothing style. This is very
challenging to achieve, since it requires simultaneously performing non-rigid
deformation of the human body, reasoning about invisible body parts, and
semantic-aware editing. We decompose the editing procedure into semantic-aware geometric and
appearance transformation. In geometric transformation, a semantic layout map
is generated that meets user demands to represent part-level spatial
constraints and further guides the semantic-aware appearance transformation. In
appearance transformation, we design two novel modules, Semantic-aware
Attentive Transfer (SAT) and Layout Graph Reasoning (LGR), to conduct
intra-part transfer and inter-part reasoning, respectively. The SAT module
produces each human part by attending to the semantically consistent regions
in the source portrait. It effectively addresses the non-rigid deformation
issue and preserves the intrinsic structure and appearance with rich texture
details. The LGR module utilizes body skeleton knowledge to construct a layout
graph that connects all relevant part features, and a graph reasoning
mechanism propagates information among part nodes to mine their relations. In
this way, the LGR module infers invisible body parts and guarantees global
coherence among all the parts. Extensive experiments on DeepFashion,
Market-1501 and in-the-wild photos demonstrate the effectiveness and
superiority of our approach. Video demo is at:
\url{https://youtu.be/vTyq9HL6jgw}.
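The two appearance-transformation modules described above can be illustrated with a toy sketch. This is not the authors' implementation; the function names, feature shapes, and the simple averaging update are illustrative assumptions, meant only to show the two core ideas: attention masked to semantically consistent regions (SAT), and feature propagation over a skeleton-derived part graph (LGR).

```python
import numpy as np

def semantic_attentive_transfer(src_feat, tgt_query, src_labels, tgt_labels):
    """Toy SAT-style attention: each target location attends only to source
    locations that share its semantic part label (hypothetical shapes:
    src_feat (N, C), tgt_query (M, C), labels are integer part ids)."""
    scores = tgt_query @ src_feat.T                       # (M, N) similarity
    mask = tgt_labels[:, None] == src_labels[None, :]     # same-part mask
    scores = np.where(mask, scores, -1e9)                 # block cross-part attention
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)         # row-wise softmax
    return weights @ src_feat                             # (M, C) transferred features

def layout_graph_reasoning(part_feats, adjacency, steps=2):
    """Toy LGR-style reasoning: propagate part-node features along a
    skeleton-derived adjacency (with self-loops) so that occluded parts
    can borrow context from connected, visible parts."""
    A = adjacency / adjacency.sum(axis=1, keepdims=True)  # row-normalize
    h = part_feats
    for _ in range(steps):
        h = 0.5 * h + 0.5 * (A @ h)                       # mix self and neighbor info
    return h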
Related papers
- Pose Guided Human Image Synthesis with Partially Decoupled GAN [25.800174118151638]
Pose Guided Human Image Synthesis (PGHIS) is a challenging task of transforming a human image from the reference pose to a target pose.
We propose a method that decouples the human body into several parts to guide the synthesis of a realistic image of the person.
In addition, we design a multi-head attention-based module for PGHIS.
arXiv Detail & Related papers (2022-10-07T15:31:37Z)
- Generalizable Neural Performer: Learning Robust Radiance Fields for Human Novel View Synthesis [52.720314035084215]
This work aims to use a general deep learning framework to synthesize free-viewpoint images of arbitrary human performers.
We present a simple yet powerful framework, named Generalizable Neural Performer (GNR), that learns a generalizable and robust neural body representation.
Experiments on GeneBody-1.0 and ZJU-Mocap show better robustness of our methods than recent state-of-the-art generalizable methods.
arXiv Detail & Related papers (2022-04-25T17:14:22Z)
- Pose with Style: Detail-Preserving Pose-Guided Image Synthesis with Conditional StyleGAN [88.62422914645066]
We present an algorithm for re-rendering a person from a single image under arbitrary poses.
Existing methods often have difficulties in hallucinating occluded contents photo-realistically while preserving the identity and fine details in the source image.
We show that our method compares favorably against the state-of-the-art algorithms in both quantitative evaluation and visual comparison.
arXiv Detail & Related papers (2021-09-13T17:59:33Z)
- Controllable Person Image Synthesis with Spatially-Adaptive Warped Normalization [72.65828901909708]
Controllable person image generation aims to produce realistic human images with desirable attributes.
We introduce a novel Spatially-Adaptive Warped Normalization (SAWN), which integrates a learned flow-field to warp modulation parameters.
We propose a novel self-training part replacement strategy to refine the pretrained model for the texture-transfer task.
arXiv Detail & Related papers (2021-05-31T07:07:44Z)
- PISE: Person Image Synthesis and Editing with Decoupled GAN [64.70360318367943]
We propose PISE, a novel two-stage generative model for Person Image Synthesis and Editing.
For human pose transfer, we first synthesize a human parsing map aligned with the target pose to represent the shape of clothing.
To decouple the shape and style of clothing, we propose joint global and local per-region encoding and normalization.
arXiv Detail & Related papers (2021-03-06T04:32:06Z)
- Style and Pose Control for Image Synthesis of Humans from a Single Monocular View [78.6284090004218]
StylePoseGAN extends a non-controllable generator to accept conditioning of pose and appearance separately.
Our network can be trained in a fully supervised way with human images to disentangle pose, appearance and body parts.
StylePoseGAN achieves state-of-the-art image generation fidelity on common perceptual metrics.
arXiv Detail & Related papers (2021-02-22T18:50:47Z)
- Structure-aware Person Image Generation with Pose Decomposition and Semantic Correlation [29.727033198797518]
We propose a structure-aware flow based method for high-quality person image generation.
We decompose the human body into different semantic parts and apply different networks to predict the flow fields for these parts separately.
Our method can generate high-quality results under large pose discrepancy and outperforms state-of-the-art methods in both qualitative and quantitative comparisons.
arXiv Detail & Related papers (2021-02-05T03:07:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences arising from its use.