SPAST: Arbitrary Style Transfer with Style Priors via Pre-trained Large-scale Model
 - URL: http://arxiv.org/abs/2505.08695v1
 - Date: Tue, 13 May 2025 15:54:36 GMT
 - Title: SPAST: Arbitrary Style Transfer with Style Priors via Pre-trained Large-scale Model
 - Authors: Zhanjie Zhang, Quanwei Zhang, Junsheng Luan, Mengyuan Yang, Yun Wang, Lei Zhao,
 - Abstract summary: Arbitrary style transfer aims to render a new stylized image which preserves the content image's structure and possesses the style image's style. Existing arbitrary style transfer methods are based on either small models or pre-trained large-scale models. We propose a new framework, called SPAST, to generate high-quality stylized images with less inference time.
 - Score: 10.233013520083606
 - License: http://creativecommons.org/licenses/by/4.0/
 - Abstract: Given an arbitrary content and style image, arbitrary style transfer aims to render a new stylized image which preserves the content image's structure and possesses the style image's style. Existing arbitrary style transfer methods are based on either small models or pre-trained large-scale models. The small model-based methods fail to generate high-quality stylized images, producing artifacts and disharmonious patterns. The pre-trained large-scale model-based methods can generate high-quality stylized images but struggle to preserve the content structure and require long inference times. To this end, we propose a new framework, called SPAST, to generate high-quality stylized images with less inference time. Specifically, we design a novel Local-global Window Size Stylization Module (LGWSSM) to fuse style features into content features. Besides, we introduce a novel style prior loss, which draws style priors from a pre-trained large-scale model into SPAST and motivates SPAST to generate high-quality stylized images with a short inference time. We conduct extensive experiments to verify that our proposed method generates high-quality stylized images with less inference time than the SOTA arbitrary style transfer methods.
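The abstract names the two ingredients of SPAST (an LGWSSM that fuses style features into content features, and a style prior loss driven by a pre-trained large-scale model) but gives no implementation details. The following is only a minimal, hypothetical PyTorch sketch of what such a module and loss could look like; the class names, window sizes, and the frozen encoder standing in for the "pre-trained large-scale model" are assumptions, not the authors' code.

```python
# Hypothetical sketch of the two ideas named in the SPAST abstract.
# Module names, window sizes, and the frozen encoder below are illustrative
# assumptions, not the authors' released implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


def window_attention(content, style, window):
    """Cross-attention from content to style inside non-overlapping windows."""
    b, c, h, w = content.shape
    # Pad so H and W are divisible by the window size.
    pad_h, pad_w = (-h) % window, (-w) % window
    content = F.pad(content, (0, pad_w, 0, pad_h))
    style = F.pad(style, (0, pad_w, 0, pad_h))
    hp, wp = content.shape[-2:]

    def to_windows(x):
        # (B, C, H, W) -> (B * num_windows, window*window, C)
        x = x.view(b, c, hp // window, window, wp // window, window)
        return x.permute(0, 2, 4, 3, 5, 1).reshape(-1, window * window, c)

    q, k = to_windows(content), to_windows(style)
    attn = torch.softmax(q @ k.transpose(1, 2) / c ** 0.5, dim=-1)
    out = attn @ k  # use the style tokens as values
    out = out.view(b, hp // window, wp // window, window, window, c)
    out = out.permute(0, 5, 1, 3, 2, 4).reshape(b, c, hp, wp)
    return out[:, :, :h, :w]


class LocalGlobalWindowStylization(nn.Module):
    """One possible reading of LGWSSM: fuse style into content with a small
    (local) window and a large (global) window, then mix the two branches."""

    def __init__(self, channels, local_window=4, global_window=16):
        super().__init__()
        self.local_window = local_window
        self.global_window = global_window
        self.mix = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, content_feat, style_feat):
        local = window_attention(content_feat, style_feat, self.local_window)
        global_ = window_attention(content_feat, style_feat, self.global_window)
        return content_feat + self.mix(torch.cat([local, global_], dim=1))


def style_prior_loss(stylized, style_img, frozen_encoder):
    """Assumed form of the style prior loss: match features of the stylized and
    style images under a frozen pre-trained large-scale encoder."""
    with torch.no_grad():
        target = frozen_encoder(style_img)
    pred = frozen_encoder(stylized)
    return 1.0 - F.cosine_similarity(pred.flatten(1), target.flatten(1)).mean()
```

In use, `LocalGlobalWindowStylization` would sit inside the decoder of a small feed-forward stylization network, and `style_prior_loss` would be added to the usual content and style losses during training, with `frozen_encoder` being whatever large pre-trained image encoder is chosen.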
 
       
      
        Related papers
- Break Stylistic Sophon: Are We Really Meant to Confine the Imagination in Style Transfer? [12.2238770989173]
StyleWallfacer is a groundbreaking unified training and inference framework. It addresses various issues encountered in the style transfer process of traditional methods. It delivers artist-level style transfer and text-driven stylization.
arXiv Detail & Related papers (2025-06-18T00:24:29Z)
- Ada-adapter: Fast Few-shot Style Personalization of Diffusion Model with Pre-trained Image Encoder [57.574544285878794]
Ada-Adapter is a novel framework for few-shot style personalization of diffusion models.
Our method enables efficient zero-shot style transfer utilizing a single reference image.
We demonstrate the effectiveness of our approach on various artistic styles, including flat art, 3D rendering, and logo design.
arXiv Detail & Related papers (2024-07-08T02:00:17Z)
- MuseumMaker: Continual Style Customization without Catastrophic Forgetting [50.12727620780213]
We propose MuseumMaker, a method that enables the synthesis of images by following a set of customized styles in a never-ending manner.
When facing a new customization style, we develop a style distillation loss module to extract and learn the styles of the training data for new image generation.
It can minimize the learning biases caused by content of new training images, and address the catastrophic overfitting issue induced by few-shot images.
arXiv Detail & Related papers (2024-04-25T13:51:38Z)
- HiCAST: Highly Customized Arbitrary Style Transfer with Adapter Enhanced Diffusion Models [84.12784265734238]
The goal of Arbitrary Style Transfer (AST) is to inject the artistic features of a style reference into a given image/video.
We propose HiCAST, which is capable of explicitly customizing the stylization results according to various sources of semantic clues.
A novel learning objective is leveraged for video diffusion model training, which significantly improves cross-frame temporal consistency.
arXiv Detail & Related papers (2024-01-11T12:26:23Z)
- ArtBank: Artistic Style Transfer with Pre-trained Diffusion Model and Implicit Style Prompt Bank [9.99530386586636]
Artistic style transfer aims to repaint the content image with the learned artistic style.
Existing artistic style transfer methods can be divided into two categories: small model-based approaches and pre-trained large-scale model-based approaches.
We propose ArtBank, a novel artistic style transfer framework, to generate highly realistic stylized images.
arXiv Detail & Related papers (2023-12-11T05:53:40Z)
- DIFF-NST: Diffusion Interleaving For deFormable Neural Style Transfer [27.39248034592382]
We propose using a new class of models to perform style transfer while enabling deformable style transfer.
We show how leveraging the priors of these models can expose new artistic controls at inference time.
arXiv Detail & Related papers (2023-07-09T12:13:43Z)
- NeAT: Neural Artistic Tracing for Beautiful Style Transfer [29.38791171225834]
Style transfer is the task of reproducing semantic contents of a source image in the artistic style of a second target image.
We present NeAT, a new state-of-the-art feed-forward style transfer method.
We use BBST-4M to improve and measure the generalization of NeAT across a huge variety of styles.
arXiv Detail & Related papers (2023-04-11T11:08:13Z)
- A Unified Arbitrary Style Transfer Framework via Adaptive Contrastive Learning [84.8813842101747]
Unified Contrastive Arbitrary Style Transfer (UCAST) is a novel style representation learning and transfer framework.
We present an adaptive contrastive learning scheme for style transfer by introducing an input-dependent temperature.
Our framework consists of three key components, i.e., a parallel contrastive learning scheme for style representation and style transfer, a domain enhancement module for effective learning of style distribution, and a generative network for style transfer.
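The "input-dependent temperature" mentioned above can be read as a per-sample scaling of the contrastive logits. Below is an illustrative sketch under that reading; the temperature head and the InfoNCE-style loss are assumptions made for exposition, not the UCAST implementation.

```python
# Illustrative sketch of a contrastive style loss with an input-dependent
# temperature; the temperature head and loss form here are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveTemperatureContrast(nn.Module):
    def __init__(self, dim):
        super().__init__()
        # Predict a strictly positive per-sample temperature from the anchor embedding.
        self.temp_head = nn.Sequential(nn.Linear(dim, 1), nn.Softplus())

    def forward(self, anchor, positive, negatives):
        """anchor, positive: (B, D); negatives: (B, N, D) style embeddings."""
        anchor = F.normalize(anchor, dim=-1)
        positive = F.normalize(positive, dim=-1)
        negatives = F.normalize(negatives, dim=-1)
        tau = self.temp_head(anchor) + 1e-2                    # (B, 1)
        pos = (anchor * positive).sum(-1, keepdim=True)        # (B, 1)
        neg = torch.einsum("bd,bnd->bn", anchor, negatives)    # (B, N)
        logits = torch.cat([pos, neg], dim=1) / tau            # (B, 1+N)
        # The positive sits at index 0 of each row of logits.
        labels = torch.zeros(anchor.size(0), dtype=torch.long, device=anchor.device)
        return F.cross_entropy(logits, labels)
```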
arXiv Detail & Related papers (2023-03-09T04:35:00Z)
- FastCLIPstyler: Optimisation-free Text-based Image Style Transfer Using Style Representations [0.0]
We present FastCLIPstyler, a generalised text-based image style transfer model capable of stylising images in a single forward pass for arbitrary text inputs.
We also introduce EdgeCLIPstyler, a lightweight model designed for compatibility with resource-constrained devices.
arXiv Detail & Related papers (2022-10-07T11:16:36Z)
- Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning [84.8813842101747]
Contrastive Arbitrary Style Transfer (CAST) is a new style representation learning and style transfer method via contrastive learning.
Our framework consists of three key components, i.e., a multi-layer style projector for style code encoding, a domain enhancement module for effective learning of style distribution, and a generative network for image style transfer.
arXiv Detail & Related papers (2022-05-19T13:11:24Z)
- Drafting and Revision: Laplacian Pyramid Network for Fast High-Quality Artistic Style Transfer [115.13853805292679]
Artistic style transfer aims at migrating the style from an example image to a content image.
Inspired by the common painting process of drawing a draft and revising the details, we introduce a novel feed-forward method named Laplacian Pyramid Network (LapStyle).
Our method can synthesize high quality stylized images in real time, where holistic style patterns are properly transferred.
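The draft-then-revise idea can be illustrated with a Laplacian pyramid: stylize a low-resolution draft, then add back refined high-frequency residuals. The sketch below is only a schematic reading of that process; `draft_net` and `revise_net` are hypothetical placeholder networks, not LapStyle's architecture.

```python
# Schematic sketch of draft-then-revise stylization over a Laplacian pyramid.
# draft_net and revise_net are placeholder modules, not the paper's networks.
import torch
import torch.nn.functional as F


def laplacian_split(img, levels=2):
    """Return the low-resolution base and a list of high-frequency residuals."""
    residuals = []
    current = img
    for _ in range(levels):
        down = F.avg_pool2d(current, 2)
        up = F.interpolate(down, size=current.shape[-2:], mode="bilinear",
                           align_corners=False)
        residuals.append(current - up)   # finest residual first
        current = down
    return current, residuals


def draft_and_revise(content, style, draft_net, revise_net, levels=2):
    base, residuals = laplacian_split(content, levels)
    # Draft: transfer style on the low-resolution base image.
    out = draft_net(base, style)
    # Revision: upsample and refine each residual band, coarse to fine.
    for res in reversed(residuals):
        out = F.interpolate(out, size=res.shape[-2:], mode="bilinear",
                            align_corners=False)
        out = out + revise_net(torch.cat([out, res], dim=1))
    return out
```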
arXiv Detail & Related papers (2021-04-12T11:53:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
       
     