RLMiniStyler: Light-weight RL Style Agent for Arbitrary Sequential Neural Style Generation
- URL: http://arxiv.org/abs/2505.04424v1
- Date: Wed, 07 May 2025 13:57:42 GMT
- Title: RLMiniStyler: Light-weight RL Style Agent for Arbitrary Sequential Neural Style Generation
- Authors: Jing Hu, Chengming Feng, Shu Hu, Ming-Ching Chang, Xin Li, Xi Wu, Xin Wang
- Abstract summary: Arbitrary style transfer aims to apply the style of any given artistic image to another content image. We propose a novel reinforcement learning-based framework for arbitrary style transfer, RLMiniStyler.
- Score: 24.933672152267803
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Arbitrary style transfer aims to apply the style of any given artistic image to another content image. However, existing deep learning-based methods often require significant computational costs to generate diverse stylized results. Motivated by this, we propose RLMiniStyler, a novel reinforcement learning-based framework for arbitrary style transfer. This framework leverages a unified reinforcement learning policy to iteratively guide the style transfer process by exploring and exploiting stylization feedback, generating smooth sequences of stylized results while keeping the model lightweight. Furthermore, we introduce an uncertainty-aware multi-task learning strategy that automatically adjusts loss weights to adapt to the content and style balance requirements at different training stages, thereby accelerating model convergence. Through a series of experiments across various image resolutions, we have validated the advantages of RLMiniStyler over other state-of-the-art methods in generating high-quality, diverse artistic image sequences at a lower cost. Codes are available at https://github.com/fengxiaoming520/RLMiniStyler.
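The abstract does not spell out how the uncertainty-aware loss weighting works. The sketch below shows one common formulation of uncertainty-based multi-task weighting (learnable log-variance per task); the `content_loss` and `style_loss` names in the usage comment are hypothetical placeholders, and this is a minimal sketch under those assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    """Sketch of uncertainty-based multi-task loss weighting.

    Each task loss L_i is scaled by exp(-s_i) and regularized by s_i,
    where s_i is a learnable log-variance. Tasks whose losses remain
    large or noisy are down-weighted automatically as training goes on.
    This is a generic scheme, not necessarily RLMiniStyler's exact one.
    """

    def __init__(self, num_tasks: int = 2):
        super().__init__()
        # One learnable log-variance per task (e.g. content and style).
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, losses):
        total = 0.0
        for i, loss in enumerate(losses):
            precision = torch.exp(-self.log_vars[i])
            total = total + precision * loss + self.log_vars[i]
        return total

# Hypothetical usage inside a training step:
# weighter = UncertaintyWeightedLoss(num_tasks=2)
# loss = weighter([content_loss, style_loss])
# loss.backward()
```

Because the weights are learned jointly with the network, the balance between content preservation and stylization can shift over training without manual tuning, which matches the abstract's description of adapting the content-style balance across training stages.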
Related papers
- Pluggable Style Representation Learning for Multi-Style Transfer [41.09041735653436]
We develop a style transfer framework by decoupling the style modeling and transferring. For style modeling, we propose a style representation learning scheme to encode the style information into a compact representation. For style transferring, we develop a style-aware multi-style transfer network (SaMST) to adapt to diverse styles using pluggable style representations.
arXiv Detail & Related papers (2025-03-26T09:44:40Z) - Harnessing the Latent Diffusion Model for Training-Free Image Style Transfer [24.46409405016844]
The style transfer task transfers the visual attributes of a style image to another content image.
We propose a training-free style transfer algorithm, Style Tracking Reverse Diffusion Process (STRDP), for a pretrained Latent Diffusion Model (LDM).
Our algorithm employs the Adaptive Instance Normalization (AdaIN) function in a distinct manner during the reverse diffusion process of an LDM (a generic AdaIN sketch is given after this list).
arXiv Detail & Related papers (2024-10-02T09:28:21Z) - ZePo: Zero-Shot Portrait Stylization with Faster Sampling [61.14140480095604]
This paper presents an inversion-free portrait stylization framework based on diffusion models that accomplishes content and style feature fusion in merely four sampling steps.
We propose a feature merging strategy to amalgamate redundant features in Consistency Features, thereby reducing the computational load of attention control.
arXiv Detail & Related papers (2024-08-10T08:53:41Z) - MuseumMaker: Continual Style Customization without Catastrophic Forgetting [50.12727620780213]
We propose MuseumMaker, a method that enables the synthesis of images by following a set of customized styles in a never-ending manner.
When facing a new customization style, we develop a style distillation loss module to extract and learn the styles of the training data for new image generation.
It can minimize the learning biases caused by the content of new training images, and address the catastrophic overfitting issue induced by few-shot images.
arXiv Detail & Related papers (2024-04-25T13:51:38Z) - Controlling Neural Style Transfer with Deep Reinforcement Learning [55.480819498109746]
We propose the first deep Reinforcement Learning-based architecture that splits one-step style transfer into a step-wise process.
Our method tends to preserve more details and structures of the content image in early steps, and synthesize more style patterns in later steps.
arXiv Detail & Related papers (2023-09-30T15:01:02Z) - A Unified Arbitrary Style Transfer Framework via Adaptive Contrastive Learning [84.8813842101747]
Unified Contrastive Arbitrary Style Transfer (UCAST) is a novel style representation learning and transfer framework.
We present an adaptive contrastive learning scheme for style transfer by introducing an input-dependent temperature.
Our framework consists of three key components, i.e., a parallel contrastive learning scheme for style representation and style transfer, a domain enhancement module for effective learning of style distribution, and a generative network for style transfer.
arXiv Detail & Related papers (2023-03-09T04:35:00Z) - Style-Agnostic Reinforcement Learning [9.338454092492901]
We present a novel method of learning style-agnostic representation using both style transfer and adversarial learning.
Our method trains the actor with diverse image styles generated from an inherent adversarial style generator.
We verify that our method achieves competitive or better performances than the state-of-the-art approaches on Procgen and Distracting Control Suite benchmarks.
arXiv Detail & Related papers (2022-08-31T13:45:00Z) - Learning Graph Neural Networks for Image Style Transfer [131.73237185888215]
State-of-the-art parametric and non-parametric style transfer approaches are prone to either distorted local style patterns due to global statistics alignment, or unpleasing artifacts resulting from patch mismatching.
In this paper, we study a novel semi-parametric neural style transfer framework that alleviates the deficiency of both parametric and non-parametric stylization.
arXiv Detail & Related papers (2022-07-24T07:41:31Z) - Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning [84.8813842101747]
Contrastive Arbitrary Style Transfer (CAST) is a new style representation learning and style transfer method via contrastive learning.
Our framework consists of three key components, i.e., a multi-layer style projector for style code encoding, a domain enhancement module for effective learning of style distribution, and a generative network for image style transfer.
arXiv Detail & Related papers (2022-05-19T13:11:24Z)
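For the STRDP entry above, the AdaIN operation it builds on is standard; the sketch below gives the usual per-channel formulation. It is a minimal, generic sketch, not the STRDP-specific application inside the reverse diffusion process.

```python
import torch

def adain(content: torch.Tensor, style: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Standard Adaptive Instance Normalization (AdaIN).

    Normalizes the content features per channel and re-scales them with the
    style features' per-channel statistics:
        AdaIN(x, y) = sigma(y) * (x - mu(x)) / sigma(x) + mu(y)
    Inputs are feature maps of shape (N, C, H, W).
    """
    c_mean = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style.mean(dim=(2, 3), keepdim=True)
    s_std = style.std(dim=(2, 3), keepdim=True) + eps
    return s_std * (content - c_mean) / c_std + s_mean
```

How STRDP applies this operation within the LDM's reverse diffusion steps is specific to that paper and is not reproduced here.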