Stylized Neural Painting
- URL: http://arxiv.org/abs/2011.08114v1
- Date: Mon, 16 Nov 2020 17:24:21 GMT
- Title: Stylized Neural Painting
- Authors: Zhengxia Zou (1), Tianyang Shi (2), Shuang Qiu (1), Yi Yuan (2),
Zhenwei Shi (3) ((1) University of Michigan, Ann Arbor, (2) NetEase Fuxi AI
Lab, (3) Beihang University)
- Abstract summary: This paper proposes an image-to-painting translation method that generates vivid and realistic painting artworks with controllable styles.
Experiments show that the paintings generated by our method have a high degree of fidelity in both global appearance and local textures.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper proposes an image-to-painting translation method that
generates vivid and realistic painting artworks with controllable styles.
Different from previous image-to-image translation methods that formulate the
translation as pixel-wise prediction, we deal with such an artistic creation
process in a vectorized environment and produce a sequence of physically
meaningful stroke parameters that can be further used for rendering. Since a
typical vector renderer is not differentiable, we design a novel neural
renderer that imitates the behavior of the vector renderer and then frame the
stroke prediction as a parameter-searching process that maximizes the
similarity between the input and the rendering output. We explore the
zero-gradient problem in parameter searching and propose to solve it from an
optimal-transportation perspective. We also show that previous neural
renderers suffer from a parameter coupling problem, and we re-design the
rendering network with a rasterization network and a shading network that
better handle the disentanglement of shape and color. Experiments show that
the paintings generated by our method have a high degree of fidelity in both
global appearance and local textures. Our method can also be jointly
optimized with neural style transfer to further transfer visual style from
other images. Our code and animated results are available at
https://jiupinjia.github.io/neuralpainter/.
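
To make the rasterization/shading split concrete, here is a minimal PyTorch sketch of a dual-branch neural renderer. The module names, layer sizes, and the 6-D stroke shape parameterization below are illustrative assumptions, not the paper's actual architecture; only the idea of separate rasterization (shape to stroke mask) and shading (color to foreground) branches comes from the abstract:

```python
import torch
import torch.nn as nn

class NeuralRenderer(nn.Module):
    # Dual-branch renderer sketch: a rasterization network maps stroke *shape*
    # parameters to a single-channel stroke mask, while a shading network maps
    # *color* parameters to the RGB foreground, keeping shape and color
    # disentangled. Layer sizes and parameter dimensions are assumptions.
    def __init__(self, shape_dim=6, color_dim=3, canvas=128):
        super().__init__()
        self.canvas = canvas
        self.rasterizer = nn.Sequential(        # shape params -> alpha mask
            nn.Linear(shape_dim, 512), nn.ReLU(),
            nn.Linear(512, canvas * canvas), nn.Sigmoid(),
        )
        self.shader = nn.Sequential(            # color params -> RGB foreground
            nn.Linear(color_dim, 512), nn.ReLU(),
            nn.Linear(512, 3 * canvas * canvas), nn.Sigmoid(),
        )

    def forward(self, shape_params, color_params):
        b = shape_params.shape[0]
        alpha = self.rasterizer(shape_params).view(b, 1, self.canvas, self.canvas)
        rgb = self.shader(color_params).view(b, 3, self.canvas, self.canvas)
        return rgb, alpha
```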
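
The parameter-searching formulation itself can likewise be sketched as direct gradient descent on stroke parameters through the differentiable renderer. The Sinkhorn routine below is a generic entropy-regularized optimal-transport relaxation standing in for the paper's transportation-based loss, and every function name and hyperparameter (stroke count, step count, learning rate, loss weight) is an assumption for illustration:

```python
import torch
import torch.nn.functional as F

def sinkhorn_loss(a_img, b_img, size=16, eps=0.1, iters=50):
    # Entropy-regularized OT (Sinkhorn) distance between two images viewed as
    # probability mass over pixel locations; a generic relaxation, not
    # necessarily the paper's exact transportation loss.
    a = F.adaptive_avg_pool2d(a_img.mean(1, keepdim=True), size).flatten()
    b = F.adaptive_avg_pool2d(b_img.mean(1, keepdim=True), size).flatten()
    a = a / (a.sum() + 1e-8)
    b = b / (b.sum() + 1e-8)
    ys, xs = torch.meshgrid(torch.arange(size), torch.arange(size), indexing="ij")
    pos = torch.stack([ys.flatten(), xs.flatten()], dim=1).float() / size
    C = torch.cdist(pos, pos) ** 2          # squared-distance ground cost
    K = torch.exp(-C / eps)
    u = torch.ones_like(a)
    for _ in range(iters):                  # Sinkhorn fixed-point updates
        v = b / (K.t() @ u + 1e-8)
        u = a / (K @ v + 1e-8)
    plan = u[:, None] * K * v[None, :]      # entropic transport plan
    return (plan * C).sum()

def paint(target, renderer, n_strokes=100, steps=400, lr=0.01, ot_weight=0.1):
    # target: (1, 3, H, W) image in [0, 1], with H and W assumed to match the
    # renderer's canvas size; renderer: e.g. the NeuralRenderer sketch above.
    shape = torch.rand(n_strokes, 6, requires_grad=True)   # assumed 6-D shapes
    color = torch.rand(n_strokes, 3, requires_grad=True)   # one RGB per stroke
    opt = torch.optim.Adam([shape, color], lr=lr)
    for _ in range(steps):
        rgb, alpha = renderer(shape, color)
        canvas = torch.zeros_like(target)
        for i in range(n_strokes):          # sequential alpha compositing
            canvas = alpha[i:i+1] * rgb[i:i+1] + (1 - alpha[i:i+1]) * canvas
        loss = F.mse_loss(canvas, target) + ot_weight * sinkhorn_loss(canvas, target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return canvas.detach()
```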
Related papers
- MambaPainter: Neural Stroke-Based Rendering in a Single Step [3.18005110016691]
Stroke-based rendering aims to reconstruct an input image in an oil-painting style by predicting brush stroke sequences.
We propose MambaPainter, capable of predicting a sequence of over 100 brush strokes in a single inference step, resulting in rapid translation.
arXiv Detail & Related papers (2024-10-16T13:02:45Z)
- Bridging 3D Gaussian and Mesh for Freeview Video Rendering [57.21847030980905]
GauMesh bridges 3D Gaussians and meshes for modeling and rendering dynamic scenes.
We show that our approach adapts the appropriate type of primitives to represent the different parts of the dynamic scene.
arXiv Detail & Related papers (2024-03-18T04:01:26Z)
- Deep Geometrized Cartoon Line Inbetweening [98.35956631655357]
Inbetweening involves generating intermediate frames between two black-and-white line drawings.
Existing frame interpolation methods that rely on matching and warping whole raster images are unsuitable for line inbetweening.
We propose AnimeInbet, which geometrizes raster line drawings into graphs of endpoints and reframes the inbetweening task as a graph fusion problem.
Our method can effectively capture the sparsity and unique structure of line drawings while preserving the details during inbetweening.
arXiv Detail & Related papers (2023-09-28T17:50:05Z)
- Stroke-based Neural Painting and Stylization with Dynamically Predicted Painting Region [66.75826549444909]
Stroke-based rendering aims to recreate an image with a set of strokes.
We propose Compositional Neural Painter, which predicts the painting region based on the current canvas.
We extend our method to stroke-based style transfer with a novel differentiable distance transform loss.
arXiv Detail & Related papers (2023-09-07T06:27:39Z)
- Neural Style Transfer for Vector Graphics [3.8983556368110226]
Style transfer between vector images has not previously been considered.
Applying standard content and style losses changes the drawing style of a vector image only insignificantly.
A new method based on differentiable rasterization can change the color and shape parameters of the content image to match the drawing style of the style image.
arXiv Detail & Related papers (2023-03-06T16:57:45Z)
- Learning Diverse Tone Styles for Image Retouching [73.60013618215328]
We propose to learn diverse image retouching with normalizing flow-based architectures.
A joint-training pipeline is composed of a style encoder, a conditional RetouchNet, and the image tone style normalizing flow (TSFlow) module.
Our proposed method performs favorably against state-of-the-art methods and is effective in generating diverse results.
arXiv Detail & Related papers (2022-07-12T09:49:21Z)
- Saliency Constrained Arbitrary Image Style Transfer using SIFT and DCNN [22.57205921266602]
When common neural style transfer methods are used, the textures and colors in the style image are usually transferred imperfectly to the content image.
This paper proposes a novel saliency constrained method to reduce or avoid such effects.
The experiments show that the saliency maps of source images can help find the correct matching and avoid artifacts.
arXiv Detail & Related papers (2022-01-14T09:00:55Z)
- Differentiable Drawing and Sketching [0.0]
We present a differentiable relaxation of the process of drawing points, lines and curves into a pixel raster (a toy sketch of this idea appears after this list).
This relaxation allows end-to-end differentiable programs and deep networks to be learned and optimised.
arXiv Detail & Related papers (2021-03-30T09:25:55Z)
- Neural Re-Rendering of Humans from a Single Image [80.53438609047896]
We propose a new method for neural re-rendering of a human under a novel user-defined pose and viewpoint.
Our algorithm represents body pose and shape as a parametric mesh which can be reconstructed from a single image.
arXiv Detail & Related papers (2021-01-11T18:53:47Z)
- Learning to Caricature via Semantic Shape Transform [95.25116681761142]
We propose an algorithm based on a semantic shape transform to produce shape exaggerations.
We show that the proposed framework is able to render visually pleasing shape exaggerations while maintaining their facial structures.
arXiv Detail & Related papers (2020-08-12T03:41:49Z)
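
As referenced in the Differentiable Drawing and Sketching entry above, the core trick of relaxing rasterization into a differentiable operation can be shown in a few lines: render a line segment by making each pixel's intensity a smooth function of its distance to the segment, so gradients flow back to the endpoint coordinates. This toy PyTorch example is our own illustration of that idea, not that paper's formulation:

```python
import torch

def soft_line(p0, p1, size=64, sigma=1.0):
    # Pixel grid: xs, ys hold each pixel's x/y coordinate.
    ys, xs = torch.meshgrid(torch.arange(size, dtype=torch.float32),
                            torch.arange(size, dtype=torch.float32),
                            indexing="ij")
    px = torch.stack([xs, ys], dim=-1)                 # (H, W, 2) pixel centers
    ab = p1 - p0
    # Project every pixel onto the segment, clamping to the endpoints.
    t = ((px - p0) * ab).sum(-1) / (ab @ ab + 1e-8)
    t = t.clamp(0.0, 1.0)
    closest = p0 + t.unsqueeze(-1) * ab                # nearest point on segment
    d2 = ((px - closest) ** 2).sum(-1)                 # squared distance field
    return torch.exp(-d2 / (2.0 * sigma ** 2))         # smooth in p0 and p1

p0 = torch.tensor([10.0, 12.0], requires_grad=True)
p1 = torch.tensor([50.0, 40.0], requires_grad=True)
img = soft_line(p0, p1)          # (64, 64) soft stroke, values in (0, 1]
img.mean().backward()            # endpoint gradients land in p0.grad, p1.grad
```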