Style Aligned Image Generation via Shared Attention
- URL: http://arxiv.org/abs/2312.02133v2
- Date: Thu, 11 Jan 2024 13:51:40 GMT
- Title: Style Aligned Image Generation via Shared Attention
- Authors: Amir Hertz, Andrey Voynov, Shlomi Fruchter, Daniel Cohen-Or
- Abstract summary: We introduce StyleAligned, a technique designed to establish style alignment among a series of generated images.
By employing minimal `attention sharing' during the diffusion process, our method maintains style consistency across images within T2I models.
Our method's evaluation across diverse styles and text prompts demonstrates high-quality synthesis and fidelity.
- Score: 61.121465570763085
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large-scale Text-to-Image (T2I) models have rapidly gained prominence across
creative fields, generating visually compelling outputs from textual prompts.
However, controlling these models to ensure consistent style remains
challenging, with existing methods necessitating fine-tuning and manual
intervention to disentangle content and style. In this paper, we introduce
StyleAligned, a novel technique designed to establish style alignment among a
series of generated images. By employing minimal `attention sharing' during the
diffusion process, our method maintains style consistency across images within
T2I models. This approach allows for the creation of style-consistent images
using a reference style through a straightforward inversion operation. Our
method's evaluation across diverse styles and text prompts demonstrates
high-quality synthesis and fidelity, underscoring its efficacy in achieving
consistent style across various inputs.
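The abstract describes the mechanism only at a high level; as a rough illustration of what `attention sharing' inside a self-attention layer could look like, the PyTorch sketch below lets every image in a generated batch attend to the reference image's keys and values, with an AdaIN-style alignment of query/key statistics. The single-head formulation, tensor shapes, and AdaIN placement are simplifying assumptions for illustration, not the paper's implementation.

```python
import torch


def adain(x: torch.Tensor, ref: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Match the per-channel mean/std of x (over tokens) to those of ref."""
    mu_x, std_x = x.mean(dim=1, keepdim=True), x.std(dim=1, keepdim=True) + eps
    mu_r, std_r = ref.mean(dim=1, keepdim=True), ref.std(dim=1, keepdim=True) + eps
    return (x - mu_x) / std_x * std_r + mu_r


def shared_self_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    """q, k, v: (batch, tokens, dim); batch index 0 is the reference image.
    Every image attends to its own tokens plus the reference's tokens,
    so style statistics propagate from the reference to all targets."""
    b, _, d = q.shape
    q_ref, k_ref, v_ref = q[:1], k[:1], v[:1]
    # Align target query/key statistics with the reference (AdaIN-style).
    q = adain(q, q_ref.expand(b, -1, -1))
    k = adain(k, k_ref.expand(b, -1, -1))
    # Share the reference keys/values with every image in the batch.
    k_shared = torch.cat([k, k_ref.expand(b, -1, -1)], dim=1)
    v_shared = torch.cat([v, v_ref.expand(b, -1, -1)], dim=1)
    attn = torch.softmax(q @ k_shared.transpose(1, 2) / d ** 0.5, dim=-1)
    return attn @ v_shared


# Example: a batch of 4 images (1 reference + 3 targets), 64 tokens, 128 channels.
q = k = v = torch.randn(4, 64, 128)
out = shared_self_attention(q, k, v)  # (4, 64, 128)
```

Because the sharing is applied only inside self-attention, no fine-tuning or extra parameters are needed, which is what allows the method to work with a reference style obtained through inversion.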
Related papers
- Rethink Arbitrary Style Transfer with Transformer and Contrastive Learning [11.900404048019594]
In this paper, we introduce an innovative technique to improve the quality of stylized images.
Firstly, we propose Style Consistency Instance Normalization (SCIN), a method to refine the alignment between content and style features.
In addition, we have developed an Instance-based Contrastive Learning (ICL) approach designed to understand relationships among various styles.
arXiv Detail & Related papers (2024-04-21T08:52:22Z)
- InstantStyle: Free Lunch towards Style-Preserving in Text-to-Image Generation [5.364489068722223]
The concept of style is inherently underdetermined, encompassing a multitude of elements such as color, material, atmosphere, design, and structure.
Inversion-based methods are prone to style degradation, often resulting in the loss of fine-grained details, while adapter-based approaches frequently require meticulous weight tuning for each reference image to balance style intensity and text controllability.
arXiv Detail & Related papers (2024-04-03T13:34:09Z)
- Visual Style Prompting with Swapping Self-Attention [26.511518230332758]
We propose a novel approach to produce a diverse range of images while maintaining specific style elements and nuances.
During the denoising process, we keep the query from the original features while swapping the key and value with those from the reference features in the late self-attention layers.
Our method demonstrates superiority over existing approaches, best reflecting the style of the references and ensuring that resulting images match the text prompts most accurately.
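As a minimal sketch of the key/value swap described above (assuming single-head attention and pre-recorded reference features; not the authors' code):

```python
import torch


def swapped_self_attention(q_gen: torch.Tensor,
                           k_ref: torch.Tensor,
                           v_ref: torch.Tensor) -> torch.Tensor:
    """q_gen: (tokens, dim) queries from the image being generated;
    k_ref, v_ref: (tokens, dim) keys/values recorded from the reference pass.
    The query keeps the generated image's own layout, while the reference
    supplies appearance through the swapped keys and values."""
    d = q_gen.shape[-1]
    attn = torch.softmax(q_gen @ k_ref.transpose(-2, -1) / d ** 0.5, dim=-1)
    return attn @ v_ref
```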
arXiv Detail & Related papers (2024-02-20T12:51:17Z)
- StyleCrafter: Enhancing Stylized Text-to-Video Generation with Style Adapter [78.75422651890776]
StyleCrafter is a generic method that enhances pre-trained T2V models with a style control adapter.
To promote content-style disentanglement, we remove style descriptions from the text prompt and extract style information solely from the reference image.
StyleCrafter efficiently generates high-quality stylized videos that align with the content of the texts and resemble the style of the reference images.
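A hypothetical sketch of a style-control adapter in this spirit: reference-image features are projected into a few extra context tokens that cross-attention can read alongside the text conditioning. The encoder, token count, dimensions, and blending scale are all assumptions for illustration, not StyleCrafter's design.

```python
import torch
import torch.nn as nn


class StyleAdapter(nn.Module):
    def __init__(self, image_dim: int = 1024, ctx_dim: int = 768, n_tokens: int = 4):
        super().__init__()
        # Project pooled reference-image features into a few "style tokens"
        # living in the same space as the text-encoder context.
        self.proj = nn.Linear(image_dim, ctx_dim * n_tokens)
        self.n_tokens, self.ctx_dim = n_tokens, ctx_dim

    def forward(self, image_features: torch.Tensor, text_context: torch.Tensor,
                scale: float = 1.0) -> torch.Tensor:
        # image_features: (batch, image_dim); text_context: (batch, seq, ctx_dim)
        style_tokens = self.proj(image_features).view(-1, self.n_tokens, self.ctx_dim)
        # Append scaled style tokens so cross-attention reads style from the
        # reference image while the text tokens keep driving the content.
        return torch.cat([text_context, scale * style_tokens], dim=1)
```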
arXiv Detail & Related papers (2023-12-01T03:53:21Z)
- ControlStyle: Text-Driven Stylized Image Generation Using Diffusion Priors [105.37795139586075]
We propose a new task for `stylizing' text-to-image models, namely text-driven stylized image generation.
We present a new diffusion model (ControlStyle) by upgrading a pre-trained text-to-image model with a trainable modulation network.
Experiments demonstrate the effectiveness of our ControlStyle in producing more visually pleasing and artistic results.
arXiv Detail & Related papers (2023-11-09T15:50:52Z)
- StyleAdapter: A Unified Stylized Image Generation Model [97.24936247688824]
StyleAdapter is a unified stylized image generation model capable of producing a variety of stylized images.
It can be integrated with existing controllable synthesis methods, such as T2I-adapter and ControlNet.
arXiv Detail & Related papers (2023-09-04T19:16:46Z)
- DiffStyler: Controllable Dual Diffusion for Text-Driven Image Stylization [66.42741426640633]
DiffStyler is a dual diffusion processing architecture to control the balance between the content and style of diffused results.
We propose learnable noise derived from the content image as the starting point of the reverse denoising process, enabling the stylization results to better preserve the structural information of the content image.
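A heavily hedged sketch of what seeding the reverse process with content-derived noise could look like, using the standard DDPM noising formula; treating the learnable part as an additive residual is an assumption for illustration, not DiffStyler's formulation.

```python
import torch


def content_seeded_start(content_latent: torch.Tensor,
                         learnable_delta: torch.Tensor,
                         alpha_bar_T: float = 1e-3) -> torch.Tensor:
    """Noise the content latent at the final timestep (standard DDPM forward
    formula), then add a learnable perturbation optimized during stylization;
    the reverse denoising process starts from the returned x_T."""
    noise = torch.randn_like(content_latent)
    x_T = alpha_bar_T ** 0.5 * content_latent + (1.0 - alpha_bar_T) ** 0.5 * noise
    return x_T + learnable_delta
```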
arXiv Detail & Related papers (2022-11-19T12:30:44Z)
- Arbitrary Style Guidance for Enhanced Diffusion-Based Text-to-Image Generation [13.894251782142584]
Diffusion-based text-to-image generation models like GLIDE and DALLE-2 have gained wide success recently.
We propose a novel style guidance method to support generating images using arbitrary style guided by a reference image.
arXiv Detail & Related papers (2022-11-14T20:52:57Z)
- Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning [84.8813842101747]
Contrastive Arbitrary Style Transfer (CAST) is a new style representation learning and style transfer method via contrastive learning.
Our framework consists of three key components, i.e., a multi-layer style projector for style code encoding, a domain enhancement module for effective learning of style distribution, and a generative network for image style transfer.
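An illustrative InfoNCE-style contrastive loss over style codes, in the spirit of the contrastive style representation learning described above; the projector, temperature, and batching scheme are assumptions, not CAST's implementation.

```python
import torch
import torch.nn.functional as F


def style_contrastive_loss(anchor: torch.Tensor,
                           positive: torch.Tensor,
                           temperature: float = 0.1) -> torch.Tensor:
    """anchor / positive: (batch, dim) style codes from two views of the same
    style image; the other items in the batch act as negatives."""
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    logits = a @ p.T / temperature            # (batch, batch) similarity matrix
    labels = torch.arange(a.shape[0], device=a.device)
    return F.cross_entropy(logits, labels)    # diagonal entries are the positives
```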
arXiv Detail & Related papers (2022-05-19T13:11:24Z)