WISE: Whitebox Image Stylization by Example-based Learning
- URL: http://arxiv.org/abs/2207.14606v1
- Date: Fri, 29 Jul 2022 10:59:54 GMT
- Title: WISE: Whitebox Image Stylization by Example-based Learning
- Authors: Winfried Lötzsch, Max Reimann, Martin Büssemeyer, Amir Semmo, Jürgen Döllner, Matthias Trapp
- Abstract summary: Image-based artistic rendering can synthesize a variety of expressive styles using algorithmic image filtering.
We present an example-based image-processing system that can handle a multitude of stylization techniques.
Our method can be optimized in a style-transfer framework or learned in a generative-adversarial setting for image-to-image translation.
- Score: 0.22835610890984162
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Image-based artistic rendering can synthesize a variety of expressive styles
using algorithmic image filtering. In contrast to deep learning-based methods,
these heuristics-based filtering techniques can operate on high-resolution
images, are interpretable, and can be parameterized according to various design
aspects. However, adapting or extending these techniques to produce new styles
is often a tedious and error-prone task that requires expert knowledge. We
propose a new paradigm to alleviate this problem: implementing algorithmic
image filtering techniques as differentiable operations that can learn
parametrizations aligned to certain reference styles. To this end, we present
WISE, an example-based image-processing system that can handle a multitude of
stylization techniques, such as watercolor, oil or cartoon stylization, within
a common framework. By training parameter prediction networks for global and
local filter parameterizations, we can simultaneously adapt effects to
reference styles and image content, e.g., to enhance facial features. Our
method can be optimized in a style-transfer framework or learned in a
generative-adversarial setting for image-to-image translation. We demonstrate
that jointly training an XDoG filter and a CNN for postprocessing can achieve
comparable results to a state-of-the-art GAN-based method.
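To make the core idea concrete, below is a minimal sketch, assuming a PyTorch setting, of an XDoG filter written as a differentiable module whose parameters (sigma, k, p, eps, phi) can be learned by backpropagation. It follows the common XDoG formulation with a soft tanh threshold; the module and helper names are illustrative, not the authors' WISE implementation.

```python
# A minimal sketch of the core idea: an XDoG edge filter written as a
# differentiable torch.nn.Module so that its parameters can be learned
# by backpropagation, e.g. against a style loss. This is an
# illustration, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

def gaussian_kernel(sigma: torch.Tensor, radius: int) -> torch.Tensor:
    """Differentiable 1D Gaussian kernel (sigma stays in the autograd graph)."""
    x = torch.arange(-radius, radius + 1, dtype=sigma.dtype, device=sigma.device)
    k = torch.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def gaussian_blur(img: torch.Tensor, sigma: torch.Tensor, radius: int = 10) -> torch.Tensor:
    """Separable Gaussian blur on a (B, 1, H, W) grayscale tensor."""
    k = gaussian_kernel(sigma, radius)
    img = F.conv2d(img, k.view(1, 1, 1, -1), padding=(0, radius))
    return F.conv2d(img, k.view(1, 1, -1, 1), padding=(radius, 0))

class DifferentiableXDoG(nn.Module):
    """XDoG with learnable global parameters (soft-thresholded variant)."""
    def __init__(self):
        super().__init__()
        self.sigma = nn.Parameter(torch.tensor(1.0))   # base blur scale
        self.k     = nn.Parameter(torch.tensor(1.6))   # ratio of the two blur scales
        self.p     = nn.Parameter(torch.tensor(20.0))  # edge sharpening strength
        self.eps   = nn.Parameter(torch.tensor(0.1))   # threshold level
        self.phi   = nn.Parameter(torch.tensor(10.0))  # soft-threshold steepness

    def forward(self, gray: torch.Tensor) -> torch.Tensor:
        g1 = gaussian_blur(gray, self.sigma)
        g2 = gaussian_blur(gray, self.sigma * self.k)
        s = (1 + self.p) * g1 - self.p * g2            # scaled difference of Gaussians
        # tanh replaces the hard threshold step so gradients flow everywhere
        return 0.5 * (1 + torch.tanh(self.phi * (s - self.eps)))
```

Because every operation stays in the autograd graph, such a module can be trained jointly with a postprocessing CNN against a style-transfer or adversarial loss, as the abstract describes.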
Related papers
- PixelShuffler: A Simple Image Translation Through Pixel Rearrangement [0.0]
Style transfer is a widely researched application of image-to-image translation, where the goal is to synthesize an image that combines the content of one image with the style of another.
Existing state-of-the-art methods often rely on complex neural networks, including diffusion models and language models, to achieve high-quality style transfer.
We propose a novel pixel shuffle method that addresses image-to-image translation in general, with style transfer as a demonstrative application.
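The summary does not spell out the rearrangement rule, so the following is a hedged sketch of the general idea of style transfer by pure pixel rearrangement, using a simple luminance-rank-matching heuristic as a stand-in for the paper's actual shuffling objective.

```python
# A minimal sketch of style transfer by pixel rearrangement: the style
# image's pixels are reordered so their luminance ranking matches that
# of the content image. The rank-matching rule is a simplifying
# assumption for illustration; the paper's shuffling objective may differ.
import numpy as np

def rearrange_style_pixels(content: np.ndarray, style: np.ndarray) -> np.ndarray:
    """content, style: (H, W, 3) float arrays of the same size."""
    h, w, _ = content.shape
    luma = np.array([0.299, 0.587, 0.114])
    c_rank = np.argsort(np.argsort(content.reshape(-1, 3) @ luma))
    s_order = np.argsort(style.reshape(-1, 3) @ luma)
    # Give the i-th darkest content position the i-th darkest style pixel:
    # the output uses only pixels from the style image, arranged by content.
    out = style.reshape(-1, 3)[s_order][c_rank]
    return out.reshape(h, w, 3)
```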
arXiv Detail & Related papers (2024-10-03T22:08:41Z)
- Customizing Text-to-Image Models with a Single Image Pair [47.49970731632113]
Art reinterpretation is the practice of creating a variation of a reference work, making a paired artwork that exhibits a distinct artistic style.
We propose Pair Customization, a new customization method that learns stylistic difference from a single image pair and then applies the acquired style to the generation process.
arXiv Detail & Related papers (2024-05-02T17:59:52Z)
- A Unified Arbitrary Style Transfer Framework via Adaptive Contrastive Learning [84.8813842101747]
Unified Contrastive Arbitrary Style Transfer (UCAST) is a novel style representation learning and transfer framework.
We present an adaptive contrastive learning scheme for style transfer by introducing an input-dependent temperature.
Our framework consists of three key components, i.e., a parallel contrastive learning scheme for style representation and style transfer, a domain enhancement module for effective learning of style distribution, and a generative network for style transfer.
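As a concrete reading of the input-dependent temperature, here is a minimal sketch of an InfoNCE-style contrastive loss in which a small head predicts the temperature from the anchor embedding; the head design and loss form are illustrative assumptions, not UCAST's exact formulation.

```python
# A minimal sketch of a contrastive style loss with an input-dependent
# temperature: instead of a fixed tau, a small head predicts tau from
# the anchor embedding itself. An illustrative assumption, not UCAST.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveTempContrast(nn.Module):
    def __init__(self, dim: int = 256, tau_min: float = 0.05, tau_max: float = 1.0):
        super().__init__()
        self.temp_head = nn.Linear(dim, 1)  # predicts tau from the anchor embedding
        self.tau_min, self.tau_max = tau_min, tau_max

    def forward(self, anchor: torch.Tensor, positive: torch.Tensor,
                negatives: torch.Tensor) -> torch.Tensor:
        """anchor, positive: (B, D); negatives: (B, N, D). Returns scalar loss."""
        a = F.normalize(anchor, dim=-1)
        p = F.normalize(positive, dim=-1)
        n = F.normalize(negatives, dim=-1)
        # Input-dependent temperature, squashed into [tau_min, tau_max].
        tau = self.tau_min + (self.tau_max - self.tau_min) * torch.sigmoid(
            self.temp_head(anchor)).squeeze(-1)                      # (B,)
        pos = (a * p).sum(-1, keepdim=True)                          # (B, 1)
        neg = torch.einsum('bd,bnd->bn', a, n)                       # (B, N)
        logits = torch.cat([pos, neg], dim=1) / tau.unsqueeze(-1)    # (B, 1+N)
        # The positive is always index 0: standard InfoNCE cross-entropy.
        target = torch.zeros(a.size(0), dtype=torch.long, device=a.device)
        return F.cross_entropy(logits, target)
```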
arXiv Detail & Related papers (2023-03-09T04:35:00Z)
- Learning Diverse Tone Styles for Image Retouching [73.60013618215328]
We propose to learn diverse image retouching with normalizing flow-based architectures.
A joint-training pipeline is composed of a style encoder, a conditional RetouchNet, and the image tone style normalizing flow (TSFlow) module.
Our proposed method performs favorably against state-of-the-art methods and is effective in generating diverse results.
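The following sketch illustrates the normalizing-flow idea behind sampling diverse styles: a toy RealNVP-style affine coupling layer stands in for the TSFlow module, whose actual architecture the summary does not describe. New tone styles are sampled by drawing from the Gaussian base distribution and inverting the flow.

```python
# A minimal sketch of the normalizing-flow idea behind diverse
# retouching: a flow maps style codes to a Gaussian base distribution,
# so new tone styles can be sampled by drawing z ~ N(0, I) and
# inverting the flow. A toy stand-in for the TSFlow module.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One RealNVP-style coupling layer over a style-code vector."""
    def __init__(self, dim: int):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(nn.Linear(self.half, 64), nn.ReLU(),
                                 nn.Linear(64, 2 * (dim - self.half)))

    def forward(self, x):                     # style code -> base space
        x1, x2 = x[:, :self.half], x[:, self.half:]
        log_s, t = self.net(x1).chunk(2, dim=1)
        return torch.cat([x1, x2 * torch.exp(log_s) + t], dim=1)

    def inverse(self, z):                     # base sample -> style code
        z1, z2 = z[:, :self.half], z[:, self.half:]
        log_s, t = self.net(z1).chunk(2, dim=1)
        return torch.cat([z1, (z2 - t) * torch.exp(-log_s)], dim=1)

# Sampling a new tone style: draw from the base Gaussian and invert.
flow = AffineCoupling(dim=32)
style_code = flow.inverse(torch.randn(1, 32))  # feed to the retouching net
```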
arXiv Detail & Related papers (2022-07-12T09:49:21Z)
- Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning [84.8813842101747]
Contrastive Arbitrary Style Transfer (CAST) is a new style representation learning and style transfer method via contrastive learning.
Our framework consists of three key components, i.e., a multi-layer style projector for style code encoding, a domain enhancement module for effective learning of style distribution, and a generative network for image style transfer.
arXiv Detail & Related papers (2022-05-19T13:11:24Z)
- TediGAN: Text-Guided Diverse Face Image Generation and Manipulation [52.83401421019309]
TediGAN is a framework for multi-modal image generation and manipulation with textual descriptions.
Its StyleGAN inversion module maps real images into the latent space of a well-trained StyleGAN.
A visual-linguistic similarity module learns text-image matching by mapping images and text into a common embedding space.
Instance-level optimization preserves identity during manipulation.
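The visual-linguistic similarity component can be pictured as follows: image and text features are projected into one embedding space and matched by cosine similarity. The encoders and dimensions below are placeholders, not TediGAN's actual modules.

```python
# A minimal sketch of visual-linguistic similarity learning: image and
# text features are projected into one embedding space and matched by
# cosine similarity. Projections and dimensions are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointEmbedding(nn.Module):
    def __init__(self, img_dim: int, txt_dim: int, embed_dim: int = 512):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, embed_dim)   # image feature -> joint space
        self.txt_proj = nn.Linear(txt_dim, embed_dim)   # text feature  -> joint space

    def similarity(self, img_feat: torch.Tensor, txt_feat: torch.Tensor) -> torch.Tensor:
        i = F.normalize(self.img_proj(img_feat), dim=-1)
        t = F.normalize(self.txt_proj(txt_feat), dim=-1)
        return i @ t.T          # (num_images, num_texts) cosine similarities

model = JointEmbedding(img_dim=2048, txt_dim=768)
scores = model.similarity(torch.randn(4, 2048), torch.randn(4, 768))
```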
arXiv Detail & Related papers (2020-12-06T16:20:19Z)
- Style Intervention: How to Achieve Spatial Disentanglement with Style-based Generators? [100.60938767993088]
We propose a lightweight optimization-based algorithm which could adapt to arbitrary input images and render natural translation effects under flexible objectives.
We verify the performance of the proposed framework in facial attribute editing on high-resolution images, where both photo-realism and consistency are required.
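The following sketch illustrates what a lightweight optimization-based editing algorithm of this kind can look like: a latent code is optimized per image against a task objective plus a consistency term, rather than training a feed-forward editing network. The generator, losses, and weights are placeholders for illustration.

```python
# A minimal sketch of optimization-based latent editing: a latent code
# is optimized per input image against a flexible objective plus a
# photorealism/consistency term. All callables here are placeholders.
import torch

def optimize_latent(generator, w_init, attribute_loss, identity_loss,
                    steps: int = 200, lr: float = 0.01, lam: float = 1.0):
    w = w_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        img = generator(w)
        # Push the target attribute while staying close to the original.
        loss = attribute_loss(img) + lam * identity_loss(img)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()
```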
arXiv Detail & Related papers (2020-11-19T07:37:31Z)
- Steering Self-Supervised Feature Learning Beyond Local Pixel Statistics [60.92229707497999]
We introduce a novel principle for self-supervised feature learning based on the discrimination of specific transformations of an image.
We demonstrate experimentally that learning to discriminate transformations such as LCI, image warping, and rotations yields features with state-of-the-art generalization capabilities.
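As a concrete instance of the principle, the sketch below implements rotation prediction, one of the transformation-discrimination tasks mentioned above: the network learns to classify which of four rotations was applied, requiring no labels. The backbone and classifier are placeholders.

```python
# A minimal sketch of transformation discrimination using rotation
# prediction: classify which of four rotations was applied to an image,
# a self-supervised pretext task needing no labels.
import torch
import torch.nn.functional as F

def rotation_pretext_batch(images: torch.Tensor):
    """images: (B, C, H, W). Returns rotated images and rotation labels 0..3."""
    rots, labels = [], []
    for k in range(4):
        rots.append(torch.rot90(images, k, dims=(2, 3)))
        labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    return torch.cat(rots), torch.cat(labels)

def pretext_loss(backbone, classifier, images):
    # backbone and classifier are placeholder modules (e.g. a CNN and a
    # linear head over its features).
    x, y = rotation_pretext_batch(images)
    return F.cross_entropy(classifier(backbone(x)), y)
```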
arXiv Detail & Related papers (2020-04-05T22:09:08Z)
- Image Stylization: From Predefined to Personalized [14.32038355309114]
We present a framework for interactive design of new image stylizations using a wide range of predefined filter blocks.
Our results include over a dozen styles designed using our interactive tool, a set of styles created procedurally, and new filters trained with our BLADE approach.
arXiv Detail & Related papers (2020-02-22T06:48:28Z)
- P$^2$-GAN: Efficient Style Transfer Using Single Style Image [2.703193151632043]
Style transfer is a useful image synthesis technique that can re-render a given image in another artistic style.
Generative Adversarial Networks (GANs) are a widely adopted framework for this task due to their better ability to represent local style patterns.
This paper proposes a novel Patch Permutation GAN (P$^2$-GAN) that can efficiently learn stroke style from a single style image.
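The patch-permutation idea can be sketched as follows: the single style image is cut into patches that are randomly permuted and reassembled, turning one exemplar into many for training. Patch size and the reassembly scheme are assumptions for illustration.

```python
# A minimal sketch of the patch-permutation idea: a single style image
# is cut into patches that are randomly permuted and reassembled,
# turning one image into many style exemplars for training.
import torch

def permute_patches(style: torch.Tensor, patch: int = 32) -> torch.Tensor:
    """style: (C, H, W) with H, W divisible by patch. Returns a reshuffled image."""
    c, h, w = style.shape
    # Split into a grid of patches: (N, C, patch, patch).
    grid = style.unfold(1, patch, patch).unfold(2, patch, patch)
    grid = grid.permute(1, 2, 0, 3, 4).reshape(-1, c, patch, patch)
    grid = grid[torch.randperm(grid.size(0))]          # random permutation
    # Reassemble the permuted patches into an image.
    gh, gw = h // patch, w // patch
    grid = grid.reshape(gh, gw, c, patch, patch).permute(2, 0, 3, 1, 4)
    return grid.reshape(c, h, w)
```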
arXiv Detail & Related papers (2020-01-21T12:08:08Z)