MichiGAN: Multi-Input-Conditioned Hair Image Generation for Portrait Editing
- URL: http://arxiv.org/abs/2010.16417v1
- Date: Fri, 30 Oct 2020 17:59:10 GMT
- Title: MichiGAN: Multi-Input-Conditioned Hair Image Generation for Portrait Editing
- Authors: Zhentao Tan and Menglei Chai and Dongdong Chen and Jing Liao and Qi Chu and Lu Yuan and Sergey Tulyakov and Nenghai Yu
- Abstract summary: MichiGAN is a novel conditional image generation method for interactive portrait hair manipulation.
We provide user control over every major hair visual factor, including shape, structure, appearance, and background.
We also build an interactive portrait hair editing system that enables straightforward manipulation of hair by projecting intuitive and high-level user inputs.
- Score: 122.82964863607938
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the recent success of face image generation with GANs, conditional
hair editing remains challenging due to the under-explored complexity of its
geometry and appearance. In this paper, we present MichiGAN
(Multi-Input-Conditioned Hair Image GAN), a novel conditional image generation
method for interactive portrait hair manipulation. To provide user control over
every major hair visual factor, we explicitly disentangle hair into four
orthogonal attributes, including shape, structure, appearance, and background.
For each of them, we design a corresponding condition module to represent,
process, and convert user inputs, and modulate the image generation pipeline in
ways that respect the natures of different visual attributes. All these
condition modules are integrated with the backbone generator to form the final
end-to-end network, which allows fully-conditioned hair generation from
multiple user inputs. On top of this network, we also build an interactive portrait hair
editing system that enables straightforward manipulation of hair by projecting
intuitive and high-level user inputs such as painted masks, guiding strokes, or
reference photos to well-defined condition representations. Through extensive
experiments and evaluations, we demonstrate the superiority of our method
regarding both result quality and user controllability. The code is available
at https://github.com/tzt101/MichiGAN.
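The architecture sketched in the abstract — separate condition modules for shape, structure, appearance, and background, each modulating a shared backbone generator — can be illustrated roughly as follows. This is a minimal, hypothetical sketch, not the authors' implementation: the names are invented, the learned modules are replaced with fixed random projections, and the conditioning is reduced to a SPADE-style per-channel scale/shift on a feature map.

```python
import numpy as np

rng = np.random.default_rng(0)

def condition_module(cond, channels):
    """Map a flat condition vector to per-channel scale (gamma) and shift (beta).

    Stand-in for a learned condition module: here a fixed random projection.
    """
    w = rng.standard_normal((cond.size, 2 * channels)) * 0.01
    params = cond @ w
    gamma, beta = params[:channels], params[channels:]
    return 1.0 + gamma, beta  # scale centered at 1 so modulation starts near identity

def modulate(features, gamma, beta):
    """Apply a per-channel scale/shift to a (C, H, W) feature map."""
    return features * gamma[:, None, None] + beta[:, None, None]

# Backbone feature map and the four disentangled user conditions
C, H, W = 8, 4, 4
features = rng.standard_normal((C, H, W))
conditions = {
    name: rng.standard_normal(16)
    for name in ("shape", "structure", "appearance", "background")
}

# Each condition module modulates the shared backbone in turn
for name, cond in conditions.items():
    gamma, beta = condition_module(cond, C)
    features = modulate(features, gamma, beta)

print(features.shape)  # (8, 4, 4)
```

The point of the sketch is the data flow, not the math: each visual attribute gets its own module that converts a user input into modulation parameters, and all modules act on one end-to-end backbone, which is what allows fully-conditioned generation from multiple inputs at once.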
Related papers
- MeGA: Hybrid Mesh-Gaussian Head Avatar for High-Fidelity Rendering and Head Editing [34.31657241047574]
We propose a Hybrid Mesh-Gaussian Head Avatar (MeGA) that models different head components with more suitable representations.
MeGA generates higher-fidelity renderings for the whole head and naturally supports more downstream tasks.
Experiments on the NeRSemble dataset demonstrate the effectiveness of our designs.
arXiv Detail & Related papers (2024-04-29T18:10:12Z)
- HairCLIPv2: Unifying Hair Editing via Proxy Feature Blending [94.39417893972262]
HairCLIP is the first work that enables hair editing based on text descriptions or reference images.
In this paper, we propose HairCLIPv2, aiming to support all the aforementioned interactions with one unified framework.
The key idea is to convert all the hair editing tasks into hair transfer tasks, with editing conditions converted into different proxies accordingly.
arXiv Detail & Related papers (2023-10-16T17:59:58Z)
- Generalizable One-shot Neural Head Avatar [90.50492165284724]
We present a method that reconstructs and animates a 3D head avatar from a single-view portrait image.
We propose a framework that not only generalizes to unseen identities based on a single-view image, but also captures characteristic details within and beyond the face area.
arXiv Detail & Related papers (2023-06-14T22:33:09Z)
- Neural Strands: Learning Hair Geometry and Appearance from Multi-View Images [40.91569888920849]
We present Neural Strands, a novel learning framework for modeling accurate hair geometry and appearance from multi-view image inputs.
The learned hair model can be rendered in real-time from any viewpoint with high-fidelity view-dependent effects.
arXiv Detail & Related papers (2022-07-28T13:08:46Z)
- HairCLIP: Design Your Hair by Text and Reference Image [100.85116679883724]
This paper proposes a new hair editing interaction mode, which enables manipulating hair attributes individually or jointly.
We encode the image and text conditions in a shared embedding space and propose a unified hair editing framework.
With the carefully designed network structures and loss functions, our framework can perform high-quality hair editing.
arXiv Detail & Related papers (2021-12-09T18:59:58Z)
- Generating Person Images with Appearance-aware Pose Stylizer [66.44220388377596]
We present a novel end-to-end framework to generate realistic person images based on given person poses and appearances.
The core of our framework is a novel generator called Appearance-aware Pose Stylizer (APS) which generates human images by coupling the target pose with the conditioned person appearance progressively.
arXiv Detail & Related papers (2020-07-17T15:58:05Z)
- Recapture as You Want [140.6691726604726]
We present a portrait recapture method enabling users to easily edit their portrait to desired posture/view, body figure and clothing style.
We decompose the editing procedure into semantic-aware geometric and appearance transformation.
In appearance transformation, we design two novel modules: Semantic-aware Attentive Transfer (SAT) and Layout Graph Reasoning (LGR).
arXiv Detail & Related papers (2020-06-02T07:43:53Z)
- Intuitive, Interactive Beard and Hair Synthesis with Generative Models [38.93415643177721]
We present an interactive approach to synthesizing realistic variations in facial hair in images.
We employ a neural network pipeline that synthesizes realistic and detailed images of facial hair directly in the target image in under one second.
We show compelling interactive editing results with a prototype user interface that allows novice users to progressively refine the generated image to match their desired hairstyle.
arXiv Detail & Related papers (2020-04-15T01:20:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.