MambaPainter: Neural Stroke-Based Rendering in a Single Step
- URL: http://arxiv.org/abs/2410.12524v1
- Date: Wed, 16 Oct 2024 13:02:45 GMT
- Title: MambaPainter: Neural Stroke-Based Rendering in a Single Step
- Authors: Tomoya Sawada, Marie Katsurai
- Abstract summary: Stroke-based rendering aims to reconstruct an input image into an oil painting style by predicting brush stroke sequences.
We propose MambaPainter, capable of predicting a sequence of over 100 brush strokes in a single inference step, resulting in rapid translation.
- Score: 3.18005110016691
- Abstract: Stroke-based rendering aims to reconstruct an input image into an oil painting style by predicting brush stroke sequences. Conventional methods perform this prediction stroke-by-stroke or require multiple inference steps due to the limitations of a predictable number of strokes. This procedure leads to inefficient translation speed, limiting their practicality. In this study, we propose MambaPainter, capable of predicting a sequence of over 100 brush strokes in a single inference step, resulting in rapid translation. We achieve this sequence prediction by incorporating the selective state-space model. Additionally, we introduce a simple extension to patch-based rendering, which we use to translate high-resolution images, improving the visual quality with a minimal increase in computational cost. Experimental results demonstrate that MambaPainter can efficiently translate inputs to oil painting-style images compared to state-of-the-art methods. The codes are available at https://github.com/STomoya/MambaPainter.
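The core idea of the abstract — one forward pass predicting the parameters of all strokes at once, which are then rendered back-to-front — can be illustrated with a toy NumPy sketch. This is a hypothetical illustration, not the paper's architecture: a plain linear map stands in for the selective state-space model, and strokes are rendered as opaque rectangles rather than learned brush textures.

```python
import numpy as np

rng = np.random.default_rng(0)

H = W = 32          # canvas size
N_STROKES = 100     # strokes predicted in one inference step
P = 7               # per-stroke params: x, y, w, h, r, g, b

def predict_strokes(image: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """One forward pass -> (N_STROKES, P) stroke parameters in (0, 1)."""
    feats = image.reshape(-1)                    # flatten image features
    raw = feats @ weights                        # single matrix multiply
    return 1.0 / (1.0 + np.exp(-raw.reshape(N_STROKES, P)))  # sigmoid

def render(strokes: np.ndarray) -> np.ndarray:
    """Paint axis-aligned rectangular strokes back-to-front."""
    canvas = np.ones((H, W, 3))                  # start from a white canvas
    for x, y, w, h, r, g, b in strokes:
        x0, y0 = int(x * (W - 1)), int(y * (H - 1))
        x1 = min(W, x0 + max(1, int(w * W / 4)))
        y1 = min(H, y0 + max(1, int(h * H / 4)))
        canvas[y0:y1, x0:x1] = [r, g, b]         # opaque brush dab
    return canvas

image = rng.random((H, W, 3))
weights = rng.normal(scale=0.01, size=(H * W * 3, N_STROKES * P))
strokes = predict_strokes(image, weights)        # all 100 strokes in one step
canvas = render(strokes)
```

The point of the sketch is the shape of the computation: there is no per-stroke loop at prediction time, only at (cheap) rendering time, which is what enables single-step inference.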
Related papers
- AttentionPainter: An Efficient and Adaptive Stroke Predictor for Scene Painting [82.54770866332456]
Stroke-based Rendering (SBR) aims to decompose an input image into a sequence of parameterized strokes, which can be rendered into a painting that resembles the input image.
We propose AttentionPainter, an efficient and adaptive model for single-step neural painting.
arXiv Detail & Related papers (2024-10-21T18:36:45Z)
- Stroke-based Neural Painting and Stylization with Dynamically Predicted Painting Region [66.75826549444909]
Stroke-based rendering aims to recreate an image with a set of strokes.
We propose Compositional Neural Painter, which predicts the painting region based on the current canvas.
We extend our method to stroke-based style transfer with a novel differentiable distance transform loss.
arXiv Detail & Related papers (2023-09-07T06:27:39Z)
- Im2Oil: Stroke-Based Oil Painting Rendering with Linearly Controllable Fineness Via Adaptive Sampling [10.440767522370688]
This paper proposes a novel stroke-based rendering (SBR) method that translates images into vivid oil paintings.
A user opinion test demonstrates that people prefer our oil paintings over the results of other methods.
arXiv Detail & Related papers (2022-09-27T07:41:04Z)
- Cylin-Painting: Seamless 360° Panoramic Image Outpainting and Beyond [136.18504104345453]
We present a Cylin-Painting framework that involves meaningful collaborations between inpainting and outpainting.
The proposed algorithm can be effectively extended to other panoramic vision tasks, such as object detection, depth estimation, and image super-resolution.
arXiv Detail & Related papers (2022-04-18T21:18:49Z)
- Paint Transformer: Feed Forward Neural Painting with Stroke Prediction [36.457204758975074]
We propose a novel Transformer-based framework, dubbed Paint Transformer, to predict the parameters of a stroke set with a feed forward network.
This way, our model can generate a set of strokes in parallel and obtain the final painting of size 512 × 512 in near real time.
Experiments demonstrate that our method achieves better painting performance than previous ones with cheaper training and inference costs.
arXiv Detail & Related papers (2021-08-09T04:18:58Z)
- In&Out: Diverse Image Outpainting via GAN Inversion [89.84841983778672]
Image outpainting seeks a semantically consistent extension of the input image beyond its available content.
In this work, we formulate the problem from the perspective of inverting generative adversarial networks.
Our generator renders micro-patches conditioned on their joint latent code as well as their individual positions in the image.
arXiv Detail & Related papers (2021-04-01T17:59:10Z)
- Stylized Neural Painting [0.0]
This paper proposes an image-to-painting translation method that generates vivid and realistic painting artworks with controllable styles.
Experiments show that the paintings generated by our method have a high degree of fidelity in both global appearance and local textures.
arXiv Detail & Related papers (2020-11-16T17:24:21Z)
- Powers of layers for image-to-image translation [60.5529622990682]
We propose a simple architecture to address unpaired image-to-image translation tasks.
We start from an image autoencoder architecture with fixed weights.
For each task we learn a residual block operating in the latent space, which is iteratively called until the target domain is reached.
arXiv Detail & Related papers (2020-08-13T09:02:17Z)
- High-Resolution Image Inpainting with Iterative Confidence Feedback and Guided Upsampling [122.06593036862611]
Existing image inpainting methods often produce artifacts when dealing with large holes in real applications.
We propose an iterative inpainting method with a feedback mechanism.
Experiments show that our method significantly outperforms existing methods in both quantitative and qualitative evaluations.
arXiv Detail & Related papers (2020-05-24T13:23:45Z)
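The iterative feedback idea in the last entry — fill the hole, keep only high-confidence pixels, and feed the remainder back for another pass — can be sketched in a few lines. This is a hypothetical toy, not the paper's method: a known-neighbour mean stands in for the learned generator, and the known-neighbour ratio stands in for the predicted confidence map.

```python
import numpy as np

def inpaint_with_feedback(img: np.ndarray, hole: np.ndarray,
                          conf_thresh: float = 0.5,
                          max_iters: int = 50) -> np.ndarray:
    """Iteratively fill `hole` (True = missing), committing only
    high-confidence pixels on each pass."""
    img = img.copy()
    known = ~hole
    for _ in range(max_iters):
        if known.all():
            break                               # hole fully filled
        committed = False
        new_known = known.copy()
        for y, x in zip(*np.where(~known)):
            ys = slice(max(0, y - 1), y + 2)
            xs = slice(max(0, x - 1), x + 2)
            nb_known = known[ys, xs]
            conf = nb_known.mean()              # confidence = known-neighbour ratio
            if nb_known.any() and conf >= conf_thresh:
                img[y, x] = img[ys, xs][nb_known].mean()
                new_known[y, x] = True          # commit this pixel
                committed = True
        if not committed:
            conf_thresh *= 0.8                  # feedback stalled: relax threshold
        known = new_known
    return img
```

Because each pass only commits pixels near already-known content, the fill front advances inward from the hole boundary, which is the intuition behind confidence-guided iterative inpainting.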
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.