Image Stylization: From Predefined to Personalized
- URL: http://arxiv.org/abs/2002.10945v1
- Date: Sat, 22 Feb 2020 06:48:28 GMT
- Title: Image Stylization: From Predefined to Personalized
- Authors: Ignacio Garcia-Dorado, Pascal Getreuer, Bartlomiej Wronski, Peyman
Milanfar
- Abstract summary: We present a framework for interactive design of new image stylizations using a wide range of predefined filter blocks.
Our results include over a dozen styles designed using our interactive tool, a set of styles created procedurally, and new filters trained with our BLADE approach.
- Score: 14.32038355309114
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We present a framework for interactive design of new image stylizations using
a wide range of predefined filter blocks. Both novel and off-the-shelf image
filtering and rendering techniques are extended and combined to allow the user
to unleash their creativity to intuitively invent, modify, and tune new styles
from a given set of filters. In parallel to this manual design, we propose a
novel procedural approach that automatically assembles sequences of filters,
leading to unique and novel styles. An important aim of our framework is to
allow for interactive exploration and design, as well as to enable videos and
camera streams to be stylized on the fly. In order to achieve this real-time
performance, we use the Best Linear Adaptive Enhancement (BLADE)
framework -- an interpretable shallow machine learning method that simulates
complex filter blocks in real time. Our representative results include over a
dozen styles designed using our interactive tool, a set of styles created
procedurally, and new filters trained with our BLADE approach.
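The core pipeline idea in the abstract — predefined filter blocks chained into a style, plus a procedural mode that assembles random filter sequences — can be illustrated with a minimal Python sketch. The specific filter blocks below (brighten, invert, posterize) and function names are illustrative assumptions, not the paper's actual filters or API.

```python
import random

# Illustrative filter blocks (assumed, not the paper's actual set).
# Each maps a grayscale image (list of rows of floats in [0, 1]) to a new image.

def brighten(img, amount=0.2):
    """Shift intensities up, clamping at 1.0."""
    return [[min(1.0, p + amount) for p in row] for row in img]

def invert(img):
    """Negative: flip intensities around the [0, 1] range."""
    return [[1.0 - p for p in row] for row in img]

def posterize(img, levels=4):
    """Quantize intensities to a fixed number of levels."""
    step = 1.0 / (levels - 1)
    return [[round(p / step) * step for p in row] for row in img]

def compose(*filters):
    """Chain filter blocks left to right into a single style."""
    def style(img):
        for f in filters:
            img = f(img)
        return img
    return style

def random_style(rng, blocks, length=3):
    """Procedural design: sample a random sequence of predefined blocks."""
    return compose(*(rng.choice(blocks) for _ in range(length)))

# Manual design: a hand-picked sequence of blocks forms one style.
cartoon = compose(brighten, posterize)

# Procedural design: a seeded generator assembles a novel style.
surprise = random_style(random.Random(0), [brighten, invert, posterize])

img = [[0.0, 0.5], [0.25, 1.0]]
stylized = cartoon(img)
```

Real filter blocks in the paper's framework are far heavier (rendering and BLADE-learned filters), but the composition pattern — styles as ordered sequences of reusable blocks — is the same.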
Related papers
- ZePo: Zero-Shot Portrait Stylization with Faster Sampling [61.14140480095604]
This paper presents an inversion-free portrait stylization framework based on diffusion models that accomplishes content and style feature fusion in merely four sampling steps.
We propose a feature merging strategy to amalgamate redundant features in Consistency Features, thereby reducing the computational load of attention control.
arXiv Detail & Related papers (2024-08-10T08:53:41Z) - Ada-adapter: Fast Few-shot Style Personalization of Diffusion Model with Pre-trained Image Encoder [57.574544285878794]
Ada-Adapter is a novel framework for few-shot style personalization of diffusion models.
Our method enables efficient zero-shot style transfer utilizing a single reference image.
We demonstrate the effectiveness of our approach on various artistic styles, including flat art, 3D rendering, and logo design.
arXiv Detail & Related papers (2024-07-08T02:00:17Z) - Rethink Arbitrary Style Transfer with Transformer and Contrastive Learning [11.900404048019594]
In this paper, we introduce an innovative technique to improve the quality of stylized images.
Firstly, we propose Style Consistency Instance Normalization (SCIN), a method to refine the alignment between content and style features.
In addition, we have developed an Instance-based Contrastive Learning (ICL) approach designed to understand relationships among various styles.
arXiv Detail & Related papers (2024-04-21T08:52:22Z) - PALP: Prompt Aligned Personalization of Text-to-Image Models [68.91005384187348]
Existing personalization methods compromise personalization ability or the alignment to complex prompts.
We propose a new approach focusing on personalization methods for a single prompt to address this issue.
Our method excels in improving text alignment, enabling the creation of images with complex and intricate prompts.
arXiv Detail & Related papers (2024-01-11T18:35:33Z) - Style Aligned Image Generation via Shared Attention [61.121465570763085]
We introduce StyleAligned, a technique designed to establish style alignment among a series of generated images.
By employing minimal attention sharing during the diffusion process, our method maintains style consistency across images within T2I models.
Our method's evaluation across diverse styles and text prompts demonstrates high image quality and style fidelity.
arXiv Detail & Related papers (2023-12-04T18:55:35Z) - MOSAIC: Multi-Object Segmented Arbitrary Stylization Using CLIP [0.0]
Style transfer driven by text prompts has paved a new path for creatively stylizing images without collecting an actual style image.
We propose a new method Multi-Object Segmented Arbitrary Stylization Using CLIP (MOSAIC) that can apply styles to different objects in the image based on the context extracted from the input prompt.
Our method extends to arbitrary objects and styles, and produces high-quality images compared to current state-of-the-art methods.
arXiv Detail & Related papers (2023-09-24T18:24:55Z) - Taming Encoder for Zero Fine-tuning Image Customization with
Text-to-Image Diffusion Models [55.04969603431266]
This paper proposes a method for generating images of customized objects specified by users.
The method is based on a general framework that bypasses the lengthy optimization required by previous approaches.
We demonstrate through experiments that our proposed method is able to synthesize images with compelling output quality, appearance diversity, and object fidelity.
arXiv Detail & Related papers (2023-04-05T17:59:32Z) - FastCLIPstyler: Optimisation-free Text-based Image Style Transfer Using
Style Representations [0.0]
We present FastCLIPstyler, a generalised text-based image style transfer model capable of stylising images in a single forward pass for arbitrary text inputs.
We also introduce EdgeCLIPstyler, a lightweight model designed for compatibility with resource-constrained devices.
arXiv Detail & Related papers (2022-10-07T11:16:36Z) - WISE: Whitebox Image Stylization by Example-based Learning [0.22835610890984162]
Image-based artistic rendering can synthesize a variety of expressive styles using algorithmic image filtering.
We present an example-based image-processing system that can handle a multitude of stylization techniques.
Our method can be optimized in a style-transfer framework or learned in a generative-adversarial setting for image-to-image translation.
arXiv Detail & Related papers (2022-07-29T10:59:54Z) - A Fast Text-Driven Approach for Generating Artistic Content [11.295288894403754]
We propose a complete framework that generates visual art.
We implement an improved version that can generate a wide range of results with varying degrees of detail, style and structure.
To further enhance the results, we insert an artistic super-resolution module in the generative pipeline.
arXiv Detail & Related papers (2022-06-22T14:34:59Z) - StyleMeUp: Towards Style-Agnostic Sketch-Based Image Retrieval [119.03470556503942]
The cross-modal matching problem is typically solved by learning a joint embedding space where the semantic content shared between the photo and sketch modalities is preserved.
An effective model needs to explicitly account for this style diversity and, crucially, generalize to unseen user styles.
Our model can not only disentangle the cross-modal shared semantic content, but can adapt the disentanglement to any unseen user style as well, making the model truly agnostic.
arXiv Detail & Related papers (2021-03-29T15:44:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.