Segmentation-Based Parametric Painting
- URL: http://arxiv.org/abs/2311.14271v1
- Date: Fri, 24 Nov 2023 04:15:10 GMT
- Title: Segmentation-Based Parametric Painting
- Authors: Manuel Ladron de Guevara, Matthew Fisher, Aaron Hertzmann
- Abstract summary: We introduce a novel image-to-painting method that facilitates the creation of large-scale, high-fidelity paintings with human-like quality and stylistic variation.
We introduce a segmentation-based painting process and a dynamic attention map approach inspired by human painting strategies.
Our optimized batch processing and patch-based loss framework enable efficient handling of large canvases.
- Score: 22.967620358813214
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce a novel image-to-painting method that facilitates the creation
of large-scale, high-fidelity paintings with human-like quality and stylistic
variation. To process large images and gain control over the painting process,
we introduce a segmentation-based painting process and a dynamic attention map
approach inspired by human painting strategies. These allow brush-stroke
optimization to proceed in batches over different image regions, capturing both
large-scale structure and fine details while providing stylistic control over
the level of detail. Our optimized batch processing and patch-based loss
framework enable efficient handling of large canvases, and rigorous evaluations
confirm that the painted outputs are both aesthetically compelling and
functionally superior to previous methods. Code available at:
https://github.com/manuelladron/semantic_based_painting.git
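As a rough illustration of the patch-based loss idea mentioned in the abstract, the sketch below averages per-patch L2 error between the painted canvas and the target image, optionally weighted by a dynamic attention map. The patch size, the plain L2 term, and the weighting scheme are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of a patch-based reconstruction loss over a large canvas.
# Patch size, the plain L2 term, and the attention weighting are illustrative
# assumptions, not the loss used in the paper.
import numpy as np

def patch_based_loss(canvas, target, attention=None, patch=64):
    """Average per-patch L2 error between a painted canvas and the target.

    canvas, target: float arrays of shape (H, W, 3) in [0, 1].
    attention: optional (H, W) map emphasizing regions that still need work.
    """
    h, w, _ = target.shape
    losses = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            c = canvas[y:y + patch, x:x + patch]
            t = target[y:y + patch, x:x + patch]
            err = np.mean((c - t) ** 2)
            if attention is not None:
                err *= np.mean(attention[y:y + patch, x:x + patch])
            losses.append(err)
    return float(np.mean(losses))

# Toy usage: an empty canvas scored against a random "target" image.
rng = np.random.default_rng(0)
target = rng.random((256, 256, 3))
print(patch_based_loss(np.zeros_like(target), target))
```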
Related papers
- PrefPaint: Aligning Image Inpainting Diffusion Model with Human Preference [62.72779589895124]
We make the first attempt to align diffusion models for image inpainting with human aesthetic standards via a reinforcement learning framework.
We train a reward model with a dataset we construct, consisting of nearly 51,000 images annotated with human preferences.
Experiments on inpainting comparison and downstream tasks, such as image extension and 3D reconstruction, demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2024-10-29T11:49:39Z)
- Sketch-guided Image Inpainting with Partial Discrete Diffusion Process [5.005162730122933]
We introduce a novel partial discrete diffusion process (PDDP) for sketch-guided inpainting.
PDDP corrupts the masked regions of the image and reconstructs these masked regions conditioned on hand-drawn sketches.
The proposed transformer module takes two inputs, the image containing the masked region to be inpainted and the query sketch, and models the reverse diffusion process.
arXiv Detail & Related papers (2024-04-18T07:07:38Z)
- HD-Painter: High-Resolution and Prompt-Faithful Text-Guided Image Inpainting with Diffusion Models [59.01600111737628]
HD-Painter is a training-free approach that accurately follows prompts and coherently scales to high-resolution image inpainting.
To this end, we design the Prompt-Aware Introverted Attention (PAIntA) layer enhancing self-attention scores.
Our experiments demonstrate that HD-Painter surpasses existing state-of-the-art approaches quantitatively and qualitatively.
arXiv Detail & Related papers (2023-12-21T18:09:30Z)
- Stroke-based Neural Painting and Stylization with Dynamically Predicted Painting Region [66.75826549444909]
Stroke-based rendering aims to recreate an image with a set of strokes.
We propose Compositional Neural Painter, which predicts the painting region based on the current canvas.
We extend our method to stroke-based style transfer with a novel differentiable distance transform loss.
arXiv Detail & Related papers (2023-09-07T06:27:39Z)
- Perceptual Artifacts Localization for Inpainting [60.5659086595901]
We propose a new learning task of automatically segmenting perceptual artifacts in inpainted images.
We train advanced segmentation networks on a dataset to reliably localize inpainting artifacts within inpainted images.
We also propose a new evaluation metric called Perceptual Artifact Ratio (PAR), which is the ratio of objectionable inpainted regions to the entire inpainted area.
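The PAR definition above amounts to a pixel-count ratio. A minimal sketch, assuming boolean masks for the inpainted region and for the regions flagged as objectionable (the mask names are hypothetical):

```python
# Hedged sketch of the Perceptual Artifact Ratio (PAR) described above:
# objectionable inpainted pixels divided by all inpainted pixels.
import numpy as np

def perceptual_artifact_ratio(artifact_mask, inpainted_mask):
    """artifact_mask, inpainted_mask: boolean arrays of shape (H, W)."""
    inpainted = inpainted_mask.sum()
    if inpainted == 0:
        return 0.0
    return float((artifact_mask & inpainted_mask).sum() / inpainted)
```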
arXiv Detail & Related papers (2022-08-05T18:50:51Z)
- Cylin-Painting: Seamless 360° Panoramic Image Outpainting and Beyond [136.18504104345453]
We present a Cylin-Painting framework that involves meaningful collaborations between inpainting and outpainting.
The proposed algorithm can be effectively extended to other panoramic vision tasks, such as object detection, depth estimation, and image super-resolution.
arXiv Detail & Related papers (2022-04-18T21:18:49Z)
- Improve Deep Image Inpainting by Emphasizing the Complexity of Missing Regions [20.245637164975594]
In this paper, we enhance the deep image inpainting models with the help of classical image complexity metrics.
A knowledge-assisted index composed of missingness complexity and forward loss is presented to guide the batch selection in the training procedure.
We experimentally demonstrate the improvements for several recently developed image inpainting models on various datasets.
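A hedged sketch of the batch-selection idea in this entry, assuming the knowledge-assisted index is a weighted sum of a per-sample complexity score and the current forward loss, with softmax-weighted sampling; the combination rule and the sampling scheme are assumptions for illustration only.

```python
# Illustrative sketch: pick training samples with probability increasing in a
# knowledge-assisted index. The weighted sum and softmax sampling are assumed.
import numpy as np

def select_batch(complexity, forward_loss, batch_size=8, alpha=0.5, rng=None):
    """complexity, forward_loss: 1-D arrays with one entry per training sample."""
    if rng is None:
        rng = np.random.default_rng()
    index = alpha * complexity + (1.0 - alpha) * forward_loss
    probs = np.exp(index - index.max())  # softmax over the index
    probs /= probs.sum()
    return rng.choice(len(index), size=batch_size, replace=False, p=probs)

# Toy usage with 100 synthetic samples.
rng = np.random.default_rng(0)
print(select_batch(rng.random(100), rng.random(100), rng=rng))
```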
arXiv Detail & Related papers (2022-02-13T09:14:52Z)
- A Wasserstein GAN for Joint Learning of Inpainting and its Spatial Optimisation [3.4392739159262145]
We propose the first generative adversarial network for spatial inpainting data optimisation.
In contrast to previous approaches, it allows joint training of an inpainting generator and a corresponding mask network.
This yields significant improvements in visual quality and speed over conventional models and also outperforms current optimisation networks.
arXiv Detail & Related papers (2022-02-11T14:02:36Z)
- Semantic Layout Manipulation with High-Resolution Sparse Attention [106.59650698907953]
We tackle the problem of semantic image layout manipulation, which aims to manipulate an input image by editing its semantic label map.
A core problem of this task is how to transfer visual details from the input images to the new semantic layout while making the resulting image visually realistic.
We propose a high-resolution sparse attention module that effectively transfers visual details to new layouts at a resolution up to 512x512.
arXiv Detail & Related papers (2020-12-14T06:50:43Z)