Neural arbitrary style transfer for portrait images using the attention mechanism
- URL: http://arxiv.org/abs/2002.07643v1
- Date: Mon, 17 Feb 2020 13:59:58 GMT
- Title: Neural arbitrary style transfer for portrait images using the attention mechanism
- Authors: S. A. Berezin, V. M. Volkova
- Abstract summary: Arbitrary style transfer is the task of synthesizing an image that has never been seen before, using a given content image and style image.
In this paper, we consider an approach to solving this problem using a combined architecture of deep neural networks with an attention mechanism.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Arbitrary style transfer is the task of synthesizing an image that has never
been seen before, using two given images: a content image and a style image. The
content image forms the structure, the basic geometric lines and shapes of the
resulting image, while the style image sets its color and texture. The word
"arbitrary" in this context means the absence of any single pre-learned style.
Convolutional neural networks that can transfer a new style only after training
or retraining on new data are therefore not considered to solve such a problem,
while networks based on the attention mechanism, which can perform such a
transformation without retraining, are. The original image can be, for example,
a photograph, and the style image a painting by a famous artist; the resulting
image is then the scene depicted in the original photograph, rendered in the
style of that painting. Recent arbitrary style transfer algorithms achieve good
results on this task; however, on portrait images of people, their output is
either unacceptable due to excessive distortion of facial features, or weakly
expressed, failing to carry the characteristic features of the style image. In
this paper, we consider an approach to solving this problem using a combined
architecture of deep neural networks with an attention mechanism that transfers
style based on the contents of a particular image segment: style clearly
predominates over form in the background part of the image, while content
prevails over form in the part of the image that directly contains the person.
Related papers
- PixelShuffler: A Simple Image Translation Through Pixel Rearrangement [0.0]
Style transfer is a widely researched application of image-to-image translation, where the goal is to synthesize an image that combines the content of one image with the style of another.
Existing state-of-the-art methods often rely on complex neural networks, including diffusion models and language models, to achieve high-quality style transfer.
We propose a novel pixel shuffle method that addresses the image-to-image translation problem generally with a specific demonstrative application in style transfer.
arXiv Detail & Related papers (2024-10-03T22:08:41Z)
- Portrait Diffusion: Training-free Face Stylization with Chain-of-Painting [64.43760427752532]
Face stylization refers to the transformation of a face into a specific portrait style.
Current methods require the use of example-based adaptation approaches to fine-tune pre-trained generative models.
This paper proposes a training-free face stylization framework, named Portrait Diffusion.
arXiv Detail & Related papers (2023-12-03T06:48:35Z)
- Stroke-based Neural Painting and Stylization with Dynamically Predicted Painting Region [66.75826549444909]
Stroke-based rendering aims to recreate an image with a set of strokes.
We propose Compositional Neural Painter, which predicts the painting region based on the current canvas.
We extend our method to stroke-based style transfer with a novel differentiable distance transform loss.
arXiv Detail & Related papers (2023-09-07T06:27:39Z)
- Artistic Arbitrary Style Transfer [1.1279808969568252]
Arbitrary Style Transfer is a technique used to produce a new image from two images: a content image, and a style image.
Balancing the structure and style components has been the major challenge that other state-of-the-art algorithms have tried to solve.
In this work, we solve these problems with a deep learning approach based on convolutional neural networks.
arXiv Detail & Related papers (2022-12-21T21:34:00Z)
- Arbitrary Style Transfer with Structure Enhancement by Combining the Global and Local Loss [51.309905690367835]
We introduce a novel arbitrary style transfer method with structure enhancement by combining the global and local loss.
Experimental results demonstrate that our method can generate higher-quality images with impressive visual effects.
arXiv Detail & Related papers (2022-07-23T07:02:57Z)
- Diverse facial inpainting guided by exemplars [8.360536784609309]
This paper introduces EXE-GAN, a novel diverse and interactive facial inpainting framework.
The proposed facial inpainting is built on generative adversarial networks and leverages the global style of the input image together with the exemplar style of the exemplar image.
A variety of experimental results and comparisons on public CelebA-HQ and FFHQ datasets are presented to demonstrate the superiority of the proposed method.
arXiv Detail & Related papers (2022-02-13T16:29:45Z)
- Neural Re-Rendering of Humans from a Single Image [80.53438609047896]
We propose a new method for neural re-rendering of a human under a novel user-defined pose and viewpoint.
Our algorithm represents body pose and shape as a parametric mesh which can be reconstructed from a single image.
arXiv Detail & Related papers (2021-01-11T18:53:47Z)
- Semantic Image Manipulation Using Scene Graphs [105.03614132953285]
We introduce a semantic scene graph network that does not require direct supervision for constellation changes or image edits.
This makes it possible to train the system from existing real-world datasets with no additional annotation effort.
arXiv Detail & Related papers (2020-04-07T20:02:49Z)
- Structural-analogy from a Single Image Pair [118.61885732829117]
In this paper, we explore the capabilities of neural networks to understand image structure given only a single pair of images, A and B.
We generate an image that keeps the appearance and style of B, but has a structural arrangement that corresponds to A.
Our method can be used to generate high quality imagery in other conditional generation tasks utilizing images A and B only.
arXiv Detail & Related papers (2020-04-05T14:51:10Z)
- Local Facial Attribute Transfer through Inpainting [3.4376560669160394]
The term attribute transfer refers to the task of altering images in such a way that the semantic interpretation of a given input image is shifted towards an intended direction.
Recent advances in attribute transfer are mostly based on generative deep neural networks, using various techniques to manipulate images in the latent space of the generator.
We present a novel method for the common sub-task of local attribute transfers, where only parts of a face have to be altered in order to achieve semantic changes.
arXiv Detail & Related papers (2020-02-07T22:57:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.