Styleverse: Towards Identity Stylization across Heterogeneous Domains
- URL: http://arxiv.org/abs/2203.00861v1
- Date: Wed, 2 Mar 2022 04:23:01 GMT
- Title: Styleverse: Towards Identity Stylization across Heterogeneous Domains
- Authors: Jia Li, Jie Cao, JunXian Duan, Ran He
- Abstract summary: We propose a new and challenging task, namely IDentity Stylization (IDS) across heterogeneous domains.
We propose an effective heterogeneous-network-based framework, Styleverse, which uses a single domain-aware generator.
Styleverse achieves higher-fidelity identity stylization compared with other state-of-the-art methods.
- Score: 70.13327076710269
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a new and challenging task, namely IDentity Stylization (IDS) across
heterogeneous domains. IDS focuses on stylizing the content identity, rather
than completely swapping it with the reference identity. We propose an effective
heterogeneous-network-based framework, Styleverse, which uses a single
domain-aware generator to exploit the Metaverse of diverse heterogeneous faces,
based on the proposed dataset FS13 with limited data. FS13 denotes 13 kinds of
Face Styles covering diverse lighting conditions, art representations and
life dimensions. Previous similar tasks, e.g., image style transfer, can handle
textural style transfer based on a reference image, but usually ignore the
highly structure-aware facial area and high-fidelity preservation of the
content. In contrast, Styleverse aims to controllably create topology-aware
faces in the Parallel Style Universe, where the source facial identity is
adaptively styled via AdaIN guided by the domain-aware and reference-aware
style embeddings from heterogeneous pretrained models. We first establish the
IDS quantitative benchmark as well as the qualitative Styleverse matrix.
Extensive experiments demonstrate that Styleverse achieves higher-fidelity
identity stylization compared with other state-of-the-art methods.
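
The abstract's core mechanism, adaptively styling content features via AdaIN with parameters derived from domain-aware and reference-aware style embeddings, can be illustrated with a minimal PyTorch sketch. The module name, embedding dimension, and the linear mapping from embedding to AdaIN parameters below are assumptions for illustration, not the authors' released Styleverse implementation.

```python
import torch
import torch.nn as nn

def adain(content_feat, scale, shift, eps=1e-5):
    # Channel-wise normalization of the content features followed by
    # re-styling with scale/shift predicted from a style embedding.
    b, c = content_feat.shape[:2]
    flat = content_feat.view(b, c, -1)
    mean = flat.mean(dim=2).view(b, c, 1, 1)
    std = flat.std(dim=2).view(b, c, 1, 1) + eps
    normalized = (content_feat - mean) / std
    return normalized * scale.view(b, c, 1, 1) + shift.view(b, c, 1, 1)

class StyleModulation(nn.Module):
    # Maps a domain-aware + reference-aware style embedding to per-channel
    # AdaIN parameters via a learned affine layer (dimensions are assumed).
    def __init__(self, style_dim=512, num_channels=256):
        super().__init__()
        self.to_params = nn.Linear(style_dim, num_channels * 2)

    def forward(self, content_feat, style_embedding):
        scale, shift = self.to_params(style_embedding).chunk(2, dim=1)
        return adain(content_feat, 1 + scale, shift)

# Toy usage with random placeholder tensors standing in for generator
# features and a pretrained-model style embedding.
feat = torch.randn(2, 256, 32, 32)
style = torch.randn(2, 512)
styled = StyleModulation()(feat, style)
print(styled.shape)  # torch.Size([2, 256, 32, 32])
```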
Related papers
- Generalized Face Anti-spoofing via Finer Domain Partition and Disentangling Liveness-irrelevant Factors [23.325272595629773]
We redefine domains based on identities rather than datasets, aiming to disentangle liveness and identity attributes.
Our method achieves state-of-the-art performance under cross-dataset and limited source dataset scenarios.
arXiv Detail & Related papers (2024-07-11T07:39:58Z) - Deformable One-shot Face Stylization via DINO Semantic Guidance [12.771707124161665]
This paper addresses the issue of one-shot face stylization, focusing on the simultaneous consideration of appearance and structure.
We explore deformation-aware face stylization that diverges from traditional single-image style reference, opting for a real-style image pair instead.
arXiv Detail & Related papers (2024-03-01T11:30:55Z) - When StyleGAN Meets Stable Diffusion: a $\mathscr{W}_+$ Adapter for
Personalized Image Generation [60.305112612629465]
Text-to-image diffusion models have excelled in producing diverse, high-quality, and photo-realistic images.
We present a novel use of the extended StyleGAN embedding space $\mathcal{W}_+$ to achieve enhanced identity preservation and disentanglement for diffusion models.
Our method adeptly generates personalized text-to-image outputs that are not only compatible with prompt descriptions but also amenable to common StyleGAN editing directions.
arXiv Detail & Related papers (2023-11-29T09:05:14Z) - A Unified Arbitrary Style Transfer Framework via Adaptive Contrastive
Learning [84.8813842101747]
Unified Contrastive Arbitrary Style Transfer (UCAST) is a novel style representation learning and transfer framework.
We present an adaptive contrastive learning scheme for style transfer by introducing an input-dependent temperature (see the sketch after this list).
Our framework consists of three key components, i.e., a parallel contrastive learning scheme for style representation and style transfer, a domain enhancement module for effective learning of style distribution, and a generative network for style transfer.
arXiv Detail & Related papers (2023-03-09T04:35:00Z) - StyleSwap: Style-Based Generator Empowers Robust Face Swapping [90.05775519962303]
We introduce a concise and effective framework named StyleSwap.
Our core idea is to leverage a style-based generator to empower high-fidelity and robust face swapping.
We identify that with only minimal modifications, a StyleGAN2 architecture can successfully handle the desired information from both source and target.
arXiv Detail & Related papers (2022-09-27T16:35:16Z) - Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning [84.8813842101747]
Contrastive Arbitrary Style Transfer (CAST) is a new style representation learning and style transfer method via contrastive learning.
Our framework consists of three key components, i.e., a multi-layer style projector for style code encoding, a domain enhancement module for effective learning of style distribution, and a generative network for image style transfer.
arXiv Detail & Related papers (2022-05-19T13:11:24Z) - SAFIN: Arbitrary Style Transfer With Self-Attentive Factorized Instance
Normalization [71.85169368997738]
Artistic style transfer aims to transfer the style characteristics of one image onto another image while retaining its content.
Self-Attention-based approaches have tackled this issue with partial success but suffer from unwanted artifacts.
This paper aims to combine the best of both worlds: self-attention and normalization.
arXiv Detail & Related papers (2021-05-13T08:01:01Z) - Anisotropic Stroke Control for Multiple Artists Style Transfer [36.92721585146738]
A Stroke Control Multi-Artist Style Transfer framework is developed.
The Anisotropic Stroke Module (ASM) endows the network with adaptive semantic consistency among various styles.
In contrast to a single-scale conditional discriminator, our discriminator is able to capture multi-scale texture clues.
arXiv Detail & Related papers (2020-10-16T05:32:26Z)
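
The UCAST entry above mentions an adaptive contrastive learning scheme with an input-dependent temperature. A minimal sketch of that general idea is given below; the temperature head, feature shapes, and InfoNCE-style loss are assumptions for illustration and are not taken from the UCAST paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveTemperatureContrast(nn.Module):
    # Contrastive (InfoNCE-style) loss whose temperature is predicted
    # per anchor sample instead of being a fixed hyperparameter.
    def __init__(self, feat_dim=128):
        super().__init__()
        self.temp_head = nn.Sequential(nn.Linear(feat_dim, 1), nn.Softplus())

    def forward(self, anchor, positive, negatives):
        # anchor, positive: (B, D); negatives: (B, N, D); all L2-normalized.
        tau = self.temp_head(anchor) + 1e-2                 # (B, 1), strictly positive
        pos_sim = (anchor * positive).sum(dim=1, keepdim=True)          # (B, 1)
        neg_sim = torch.bmm(negatives, anchor.unsqueeze(2)).squeeze(2)  # (B, N)
        logits = torch.cat([pos_sim, neg_sim], dim=1) / tau
        labels = torch.zeros(anchor.size(0), dtype=torch.long, device=anchor.device)
        return F.cross_entropy(logits, labels)

# Toy usage with normalized random features as placeholders.
a = F.normalize(torch.randn(4, 128), dim=1)
p = F.normalize(torch.randn(4, 128), dim=1)
n = F.normalize(torch.randn(4, 16, 128), dim=2)
loss = AdaptiveTemperatureContrast()(a, p, n)
```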