DRB-GAN: A Dynamic ResBlock Generative Adversarial Network for Artistic
Style Transfer
- URL: http://arxiv.org/abs/2108.07379v2
- Date: Thu, 19 Aug 2021 01:44:00 GMT
- Title: DRB-GAN: A Dynamic ResBlock Generative Adversarial Network for Artistic
Style Transfer
- Authors: Wenju Xu and Chengjiang Long and Ruisheng Wang and Guanghui Wang
- Abstract summary: The paper proposes a Dynamic ResBlock Generative Adversarial Network (DRB-GAN) for artistic style transfer.
Our proposed DRB-GAN outperforms state-of-the-art methods in terms of visual quality and efficiency.
- Score: 29.20616177457981
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The paper proposes a Dynamic ResBlock Generative Adversarial Network
(DRB-GAN) for artistic style transfer. The style code is modeled as the shared
parameters for Dynamic ResBlocks connecting both the style encoding network and
the style transfer network. In the style encoding network, a style class-aware
attention mechanism is used to attend to the style feature representations for
generating the style codes. In the style transfer network, multiple Dynamic
ResBlocks are designed to integrate the style code with the extracted CNN
semantic features, which are then fed into the spatial window Layer-Instance
Normalization (SW-LIN) decoder, enabling high-quality synthetic images
with artistic style transfer. Moreover, the style collection conditional
discriminator is designed to equip our DRB-GAN model with abilities for both
arbitrary style transfer and collection style transfer during the training
stage. For both arbitrary style transfer and collection style transfer,
extensive experiments demonstrate that our proposed DRB-GAN outperforms
state-of-the-art methods in terms of visual quality and efficiency. Our
source code is available at https://github.com/xuwenju123/DRB-GAN.
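The abstract names two concrete components, so a rough sketch helps make them tangible. Below is a minimal PyTorch illustration, assuming the style code is a flat vector that a small MLP maps to per-sample convolution kernels, and approximating SW-LIN as a learned per-channel blend of instance and layer normalization (the paper's spatial-window statistics are simplified away); `DynamicResBlock`, `SWLIN`, and all sizes are hypothetical stand-ins, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SWLIN(nn.Module):
    """Simplified stand-in for spatial window Layer-Instance Normalization:
    normalize with both IN and LN statistics, then blend them with a
    learned gate rho (here per channel, not per spatial window)."""
    def __init__(self, channels, eps=1e-5):
        super().__init__()
        self.eps = eps
        self.rho = nn.Parameter(torch.full((1, channels, 1, 1), 0.5))
        self.gamma = nn.Parameter(torch.ones(1, channels, 1, 1))
        self.beta = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, x):
        # Instance norm: statistics per sample and channel.
        in_mean = x.mean(dim=(2, 3), keepdim=True)
        in_var = x.var(dim=(2, 3), keepdim=True)
        x_in = (x - in_mean) / torch.sqrt(in_var + self.eps)
        # Layer norm: statistics per sample across C, H, W.
        ln_mean = x.mean(dim=(1, 2, 3), keepdim=True)
        ln_var = x.var(dim=(1, 2, 3), keepdim=True)
        x_ln = (x - ln_mean) / torch.sqrt(ln_var + self.eps)
        rho = self.rho.clamp(0, 1)
        return self.gamma * (rho * x_in + (1 - rho) * x_ln) + self.beta

class DynamicResBlock(nn.Module):
    """Residual block whose conv weights are predicted from the style
    code, so one generator can adapt to arbitrary styles at run time."""
    def __init__(self, channels, style_dim, k=3):
        super().__init__()
        self.channels, self.k = channels, k
        # Small MLP mapping the style code to a full conv kernel.
        self.to_weight = nn.Linear(style_dim, channels * channels * k * k)
        self.norm = SWLIN(channels)

    def forward(self, x, style_code):
        b, c, h, w = x.shape
        # One predicted kernel per sample; grouped conv applies each
        # sample's kernel to that sample only.
        weight = self.to_weight(style_code).view(b * c, c, self.k, self.k)
        out = F.conv2d(x.reshape(1, b * c, h, w), weight,
                       padding=self.k // 2, groups=b)
        out = out.view(b, c, h, w)
        return x + self.norm(F.relu(out))

# Usage: content features modulated by a style code.
feats = torch.randn(2, 64, 32, 32)
style = torch.randn(2, 128)
block = DynamicResBlock(channels=64, style_dim=128)
print(block(feats, style).shape)  # torch.Size([2, 64, 32, 32])
```

The grouped-convolution trick at the end of `DynamicResBlock.forward` is just a standard way to apply a different predicted kernel to each sample in the batch with a single `F.conv2d` call.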
Related papers
- VQ-Style: Disentangling Style and Content in Motion with Residual Quantized Representations [21.963237916505175]
We propose a novel method for effective disentanglement of style and content in human motion data. Our approach is guided by the insight that content corresponds to coarse motion attributes while style captures the finer, expressive details. We harness this disentangled representation using our simple and effective inference-time technique, Quantized Code Swapping.
arXiv Detail & Related papers (2026-02-02T16:58:17Z)
- CDST: Color Disentangled Style Transfer for Universal Style Reference Customization [5.5422947587598035]
We introduce Color Disentangled Style Transfer (CDST), a novel and efficient two-stream style transfer training paradigm. With a single model, CDST unlocks universal style transfer capabilities in a tuning-free manner during inference.
arXiv Detail & Related papers (2025-05-22T18:44:48Z)
- Pluggable Style Representation Learning for Multi-Style Transfer [41.09041735653436]
We develop a style transfer framework by decoupling style modeling from style transfer.
For style modeling, we propose a style representation learning scheme to encode the style information into a compact representation.
For style transfer, we develop a style-aware multi-style transfer network (SaMST) to adapt to diverse styles using pluggable style representations; a toy sketch of the pluggable idea follows this entry.
arXiv Detail & Related papers (2025-03-26T09:44:40Z)
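The pluggable idea above is easy to picture in code: keep the transfer network fixed and store each style as a compact vector that can be swapped in, or appended, without retraining. Below is a toy PyTorch sketch under those assumptions; `PluggableStyleBank`, `TinyTransferNet`, and the 64-dimensional codes are invented for illustration and are not SaMST's actual design.

```python
import torch
import torch.nn as nn

class PluggableStyleBank(nn.Module):
    """Toy style bank: each style is a compact learned vector that can
    be plugged into (or added to) a fixed transfer network."""
    def __init__(self, num_styles, style_dim=64):
        super().__init__()
        self.bank = nn.Embedding(num_styles, style_dim)

    def add_style(self, init=None):
        # Append one new style row without touching existing weights.
        old = self.bank.weight.data
        new = init if init is not None else torch.randn(1, old.size(1))
        self.bank = nn.Embedding.from_pretrained(
            torch.cat([old, new], dim=0), freeze=False)
        return self.bank.num_embeddings - 1  # id of the new style

class TinyTransferNet(nn.Module):
    """Fixed content branch modulated by whichever style vector is plugged in."""
    def __init__(self, channels=32, style_dim=64):
        super().__init__()
        self.conv = nn.Conv2d(3, channels, 3, padding=1)
        self.affine = nn.Linear(style_dim, channels * 2)  # scale and shift
        self.out = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, x, style_vec):
        h = torch.relu(self.conv(x))
        scale, shift = self.affine(style_vec).chunk(2, dim=-1)
        h = h * (1 + scale[:, :, None, None]) + shift[:, :, None, None]
        return self.out(h)

bank = PluggableStyleBank(num_styles=4)
net = TinyTransferNet()
img = torch.randn(1, 3, 64, 64)
sid = torch.tensor([2])
out = net(img, bank.bank(sid))  # transfer with style 2
new_id = bank.add_style()       # plug in a new style later
print(out.shape, new_id)
```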
- DiffuseST: Unleashing the Capability of the Diffusion Model for Style Transfer [13.588643982359413]
Style transfer aims to fuse the artistic representation of a style image with the structural information of a content image.
Existing methods train specific networks or utilize pre-trained models to learn content and style features.
We propose a novel and training-free approach for style transfer, combining textual embedding with spatial features.
arXiv Detail & Related papers (2024-10-19T06:42:43Z)
- ArtWeaver: Advanced Dynamic Style Integration via Diffusion Model [73.95608242322949]
Stylized Text-to-Image Generation (STIG) aims to generate images from text prompts and style reference images.
We present ArtWeaver, a novel framework that leverages pretrained Stable Diffusion to address challenges such as misinterpreted styles and inconsistent semantics.
arXiv Detail & Related papers (2024-05-24T07:19:40Z)
- FreeStyle: Free Lunch for Text-guided Style Transfer using Diffusion Models [11.401299303276016]
We introduce FreeStyle, an innovative style transfer method built upon a pre-trained large diffusion model.
Our method enables style transfer through a text description of the desired style alone, eliminating the need for style images.
Our experimental results demonstrate high-quality synthesis and fidelity of our method across various content images and style text prompts.
arXiv Detail & Related papers (2024-01-28T12:00:31Z)
- StylerDALLE: Language-Guided Style Transfer Using a Vector-Quantized Tokenizer of a Large-Scale Generative Model [64.26721402514957]
We propose StylerDALLE, a style transfer method that uses natural language to describe abstract art styles.
Specifically, we formulate the language-guided style transfer task as a non-autoregressive token sequence translation.
To incorporate style information, we propose a Reinforcement Learning strategy with CLIP-based language supervision.
arXiv Detail & Related papers (2023-03-16T12:44:44Z)
- A Unified Arbitrary Style Transfer Framework via Adaptive Contrastive Learning [84.8813842101747]
Unified Contrastive Arbitrary Style Transfer (UCAST) is a novel style representation learning and transfer framework.
We present an adaptive contrastive learning scheme for style transfer by introducing an input-dependent temperature; a toy sketch of this idea follows this entry.
Our framework consists of three key components, i.e., a parallel contrastive learning scheme for style representation and style transfer, a domain enhancement module for effective learning of style distribution, and a generative network for style transfer.
arXiv Detail & Related papers (2023-03-09T04:35:00Z)
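As promised above, here is a toy sketch of an input-dependent temperature in an InfoNCE-style contrastive loss: a small head predicts a per-sample temperature that rescales the similarity logits. All names, shapes, and the sigmoid rescaling are illustrative assumptions, not UCAST's published formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveTempContrast(nn.Module):
    """InfoNCE-style contrastive loss whose temperature is predicted
    per input instead of being a fixed hyperparameter."""
    def __init__(self, dim, t_min=0.05, t_max=1.0):
        super().__init__()
        self.temp_net = nn.Linear(dim, 1)  # input-dependent temperature
        self.t_min, self.t_max = t_min, t_max

    def forward(self, anchor, positive, negatives):
        # anchor/positive: (B, D); negatives: (B, N, D), all L2-normalized.
        t = torch.sigmoid(self.temp_net(anchor))        # (B, 1) in (0, 1)
        t = self.t_min + (self.t_max - self.t_min) * t  # rescale to range
        pos = (anchor * positive).sum(-1, keepdim=True)              # (B, 1)
        neg = torch.einsum('bd,bnd->bn', anchor, negatives)          # (B, N)
        logits = torch.cat([pos, neg], dim=1) / t
        # The positive sits at index 0 of each row of logits.
        labels = torch.zeros(anchor.size(0), dtype=torch.long)
        return F.cross_entropy(logits, labels)

# Usage with random, normalized style embeddings.
a = F.normalize(torch.randn(4, 128), dim=-1)
p = F.normalize(torch.randn(4, 128), dim=-1)
n = F.normalize(torch.randn(4, 16, 128), dim=-1)
loss = AdaptiveTempContrast(128)(a, p, n)
print(loss.item())
```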
- HyperNST: Hyper-Networks for Neural Style Transfer [19.71337532582559]
We present a technique for the artistic stylization of images, based on Hyper-networks and the StyleGAN2 architecture.
Our contribution is a novel method for inducing style transfer parameterized by a metric space, pre-trained for style-based visual search (SBVS).
The technical contribution is a hyper-network that predicts weight updates to a StyleGAN2 pre-trained over a diverse gamut of artistic content.
arXiv Detail & Related papers (2022-08-09T14:34:07Z)
- Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning [84.8813842101747]
Contrastive Arbitrary Style Transfer (CAST) is a new style representation learning and style transfer method via contrastive learning.
Our framework consists of three key components, i.e., a multi-layer style projector for style code encoding, a domain enhancement module for effective learning of style distribution, and a generative network for image style transfer.
arXiv Detail & Related papers (2022-05-19T13:11:24Z)
- Pastiche Master: Exemplar-Based High-Resolution Portrait Style Transfer [103.54337984566877]
Recent studies on StyleGAN show high performance on artistic portrait generation by transfer learning with limited data.
We introduce a novel DualStyleGAN with flexible control of dual styles of the original face domain and the extended artistic portrait domain.
Experiments demonstrate the superiority of DualStyleGAN over state-of-the-art methods in high-quality portrait style transfer and flexible style control.
arXiv Detail & Related papers (2022-03-24T17:57:11Z)
- Arbitrary Video Style Transfer via Multi-Channel Correlation [84.75377967652753]
We propose the Multi-Channel Correlation network (MCCNet) to fuse exemplar style features and input content features for efficient style transfer.
MCCNet works directly on the feature space of the style and content domains, where it learns to rearrange and fuse style features based on their similarity with content features; a toy sketch of this fusion follows this entry.
The outputs generated by MCCNet are features containing the desired style patterns, which can further be decoded into images with vivid style textures.
arXiv Detail & Related papers (2020-09-17T01:30:46Z)
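Following the MCCNet entry above, here is a toy sketch of similarity-based fusion: style feature channels are rearranged by their correlation with content feature channels, attention-style, and added back to the content. The module name, 1x1 projections, and shapes are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelCorrelationFusion(nn.Module):
    """Toy fusion: rearrange style features by their channel-wise
    correlation with content features, then add them to the content."""
    def __init__(self, channels):
        super().__init__()
        self.proj_c = nn.Conv2d(channels, channels, 1)
        self.proj_s = nn.Conv2d(channels, channels, 1)

    def forward(self, content, style):
        b, c, h, w = content.shape
        q = self.proj_c(content).flatten(2)  # (B, C, HW)
        k = self.proj_s(style).flatten(2)    # (B, C, HW)
        # Channel-to-channel correlation between content and style.
        corr = torch.bmm(q, k.transpose(1, 2)) / (h * w)  # (B, C, C)
        attn = F.softmax(corr, dim=-1)
        v = style.flatten(2)                              # (B, C, HW)
        fused = torch.bmm(attn, v).view(b, c, h, w)       # rearranged style
        return content + fused

content = torch.randn(1, 64, 32, 32)
style = torch.randn(1, 64, 32, 32)
out = ChannelCorrelationFusion(64)(content, style)
print(out.shape)  # torch.Size([1, 64, 32, 32])
```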
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.