BlendGAN: Implicitly GAN Blending for Arbitrary Stylized Face Generation
- URL: http://arxiv.org/abs/2110.11728v1
- Date: Fri, 22 Oct 2021 12:00:27 GMT
- Title: BlendGAN: Implicitly GAN Blending for Arbitrary Stylized Face Generation
- Authors: Mingcong Liu, Qiang Li, Zekui Qin, Guoxin Zhang, Pengfei Wan, Wen Zheng
- Abstract summary: We propose BlendGAN for arbitrary stylized face generation.
We first train a self-supervised style encoder on the generic artistic dataset to extract the representations of arbitrary styles.
In addition, a weighted blending module (WBM) is proposed to blend face and style representations implicitly and control the arbitrary stylization effect.
- Score: 9.370501805054344
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Generative Adversarial Networks (GANs) have made a dramatic leap in
high-fidelity image synthesis and stylized face generation. Recently, a
layer-swapping mechanism has been developed to improve the stylization
performance. However, this method is incapable of fitting arbitrary styles in a
single model and requires hundreds of style-consistent training images for each
style. To address the above issues, we propose BlendGAN for arbitrary stylized
face generation by leveraging a flexible blending strategy and a generic
artistic dataset. Specifically, we first train a self-supervised style encoder
on the generic artistic dataset to extract the representations of arbitrary
styles. In addition, a weighted blending module (WBM) is proposed to blend face
and style representations implicitly and control the arbitrary stylization
effect. By doing so, BlendGAN can gracefully fit arbitrary styles in a unified
model while avoiding case-by-case preparation of style-consistent training
images. To this end, we also present a novel large-scale artistic face dataset
AAHQ. Extensive experiments demonstrate that BlendGAN outperforms
state-of-the-art methods in terms of visual quality and style diversity for
both latent-guided and reference-guided stylized face synthesis.
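The abstract describes two pieces: a self-supervised style encoder that maps an arbitrary artistic reference to a style representation, and a weighted blending module (WBM) that implicitly mixes face and style representations to control the stylization effect. The exact formulation is not given here, so the following is only a minimal PyTorch sketch of the blending idea, assuming StyleGAN-style per-layer latent codes for the face and the style reference and a learnable per-layer blending weight; the class name, tensor shapes, and the `strength` knob are hypothetical, not the authors' implementation.

```python
# Hedged sketch: per-layer blending of face and style latent codes.
# Assumes StyleGAN-like W+ codes of shape (batch, n_layers, latent_dim).
# Module and parameter names are hypothetical, not from the paper.
import torch
import torch.nn as nn


class WeightedBlendingModule(nn.Module):
    def __init__(self, n_layers: int = 18):
        super().__init__()
        # One learnable blending logit per generator layer.
        self.blend_logits = nn.Parameter(torch.zeros(n_layers))

    def forward(self, w_face: torch.Tensor, w_style: torch.Tensor,
                strength: float = 1.0) -> torch.Tensor:
        # w_face, w_style: (batch, n_layers, latent_dim)
        # Per-layer weights in [0, 1], scaled by a global stylization strength.
        alpha = torch.sigmoid(self.blend_logits) * strength   # (n_layers,)
        alpha = alpha.view(1, -1, 1)                           # broadcast
        # Implicit blending: convex combination of the two codes per layer.
        return (1.0 - alpha) * w_face + alpha * w_style


# Usage sketch: blend a face code with a style code produced by a
# (hypothetical) self-supervised style encoder, then feed the result
# to a StyleGAN-style synthesis network.
if __name__ == "__main__":
    batch, n_layers, latent_dim = 2, 18, 512
    w_face = torch.randn(batch, n_layers, latent_dim)
    w_style = torch.randn(batch, n_layers, latent_dim)
    wbm = WeightedBlendingModule(n_layers)
    w_blend = wbm(w_face, w_style, strength=0.7)
    print(w_blend.shape)  # torch.Size([2, 18, 512])
```

In this reading, strength = 0 returns the unstylized face code and strength = 1 applies the learned per-layer mix in full, which matches the abstract's claim that the WBM controls the arbitrary stylization effect.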
Related papers
- ZePo: Zero-Shot Portrait Stylization with Faster Sampling [61.14140480095604]
This paper presents an inversion-free portrait stylization framework based on diffusion models that accomplishes content and style feature fusion in merely four sampling steps.
We propose a feature merging strategy to amalgamate redundant features in Consistency Features, thereby reducing the computational load of attention control.
arXiv Detail & Related papers (2024-08-10T08:53:41Z)
- ArtNeRF: A Stylized Neural Field for 3D-Aware Cartoonized Face Synthesis [11.463969116010183]
ArtNeRF is a novel face stylization framework derived from a 3D-aware GAN.
We propose an expressive generator to synthesize stylized faces and a triple-branch discriminator module to improve style consistency.
Experiments demonstrate that ArtNeRF is versatile in generating high-quality 3D-aware cartoon faces with arbitrary styles.
arXiv Detail & Related papers (2024-04-21T16:45:35Z)
- Deformable One-shot Face Stylization via DINO Semantic Guidance [12.771707124161665]
This paper addresses the issue of one-shot face stylization, focusing on the simultaneous consideration of appearance and structure.
We explore deformation-aware face stylization that diverges from traditional single-image style reference, opting for a real-style image pair instead.
arXiv Detail & Related papers (2024-03-01T11:30:55Z)
- HiCAST: Highly Customized Arbitrary Style Transfer with Adapter Enhanced Diffusion Models [84.12784265734238]
The goal of Arbitrary Style Transfer (AST) is to inject the artistic features of a style reference into a given image or video.
We propose HiCAST, which is capable of explicitly customizing the stylization results according to various sources of semantic clues.
A novel learning objective is leveraged for video diffusion model training, which significantly improves cross-frame temporal consistency.
arXiv Detail & Related papers (2024-01-11T12:26:23Z)
- High-Fidelity Face Swapping with Style Blending [16.024260677867076]
We propose an innovative end-to-end framework for high-fidelity face swapping.
First, we introduce a StyleGAN-based facial attributes encoder that extracts essential features from faces and inverts them into a latent style code.
Second, we introduce an attention-based style blending module to effectively transfer Face IDs from source to target.
arXiv Detail & Related papers (2023-12-17T23:22:37Z)
- FISTNet: FusIon of STyle-path generative Networks for Facial Style Transfer [15.308837341075135]
StyleGAN-based methods tend to overfit, which introduces artifacts into the facial images.
We propose a FusIon of STyles (FIST) network for facial images that leverages pre-trained multipath style transfer networks.
arXiv Detail & Related papers (2023-07-18T07:20:31Z)
- Multi-Modal Face Stylization with a Generative Prior [27.79677001997915]
MMFS supports multi-modal face stylization by leveraging the strengths of StyleGAN.
We introduce a two-stage training strategy, where we train the encoder in the first stage to align the feature maps with StyleGAN.
In the second stage, the entire network is fine-tuned with artistic data for stylized face generation.
arXiv Detail & Related papers (2023-05-29T11:01:31Z)
- A Unified Arbitrary Style Transfer Framework via Adaptive Contrastive Learning [84.8813842101747]
Unified Contrastive Arbitrary Style Transfer (UCAST) is a novel style representation learning and transfer framework.
We present an adaptive contrastive learning scheme for style transfer by introducing an input-dependent temperature; a hedged sketch of this idea appears after this list.
Our framework consists of three key components, i.e., a parallel contrastive learning scheme for style representation and style transfer, a domain enhancement module for effective learning of style distribution, and a generative network for style transfer.
arXiv Detail & Related papers (2023-03-09T04:35:00Z)
- StyleSwap: Style-Based Generator Empowers Robust Face Swapping [90.05775519962303]
We introduce a concise and effective framework named StyleSwap.
Our core idea is to leverage a style-based generator to empower high-fidelity and robust face swapping.
We identify that with only minimal modifications, a StyleGAN2 architecture can successfully handle the desired information from both source and target.
arXiv Detail & Related papers (2022-09-27T16:35:16Z)
- Learning Graph Neural Networks for Image Style Transfer [131.73237185888215]
State-of-the-art parametric and non-parametric style transfer approaches are prone to either distorted local style patterns due to global statistics alignment, or unpleasing artifacts resulting from patch mismatching.
In this paper, we study a novel semi-parametric neural style transfer framework that alleviates the deficiency of both parametric and non-parametric stylization.
arXiv Detail & Related papers (2022-07-24T07:41:31Z)
- StyleMeUp: Towards Style-Agnostic Sketch-Based Image Retrieval [119.03470556503942]
The cross-modal matching problem is typically solved by learning a joint embedding space in which the semantic content shared between photo and sketch modalities is preserved.
An effective model needs to explicitly account for the style diversity across users and, crucially, generalize to unseen user styles.
Our model not only disentangles the cross-modal shared semantic content but also adapts the disentanglement to any unseen user style, making it truly style-agnostic.
arXiv Detail & Related papers (2021-03-29T15:44:19Z)
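The UCAST entry above mentions an adaptive contrastive learning scheme with an input-dependent temperature. The sketch below only illustrates that general idea, replacing the fixed temperature of an InfoNCE-style loss with one predicted per sample by a small network; it is not the UCAST objective, and the head architecture, temperature range, and all names are assumptions.

```python
# Hedged sketch of a contrastive loss whose temperature is predicted
# from the input features rather than fixed. Illustrates the general
# "input-dependent temperature" idea only; not the UCAST objective.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveTemperature(nn.Module):
    def __init__(self, dim: int, t_min: float = 0.05, t_max: float = 1.0):
        super().__init__()
        self.net = nn.Linear(dim, 1)   # assumed: a tiny head predicting tau
        self.t_min, self.t_max = t_min, t_max

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Map each sample's features to a temperature in [t_min, t_max].
        t = torch.sigmoid(self.net(feats))              # (batch, 1)
        return self.t_min + (self.t_max - self.t_min) * t


def adaptive_contrastive_loss(anchor, positive, temp_net):
    # anchor, positive: (batch, dim) embeddings; positives are matched by
    # index, and all other samples in the batch act as negatives.
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    logits = anchor @ positive.t()                      # (batch, batch)
    tau = temp_net(anchor)                              # per-anchor temperature
    labels = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits / tau, labels)


# Usage sketch with random features standing in for style embeddings.
if __name__ == "__main__":
    batch, dim = 8, 128
    temp_net = AdaptiveTemperature(dim)
    a, p = torch.randn(batch, dim), torch.randn(batch, dim)
    print(adaptive_contrastive_loss(a, p, temp_net).item())
```

A lower predicted temperature sharpens the softmax for that sample, so the network can, in effect, decide per input how strongly hard negatives are penalized.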