Paint2Pix: Interactive Painting based Progressive Image Synthesis and Editing
- URL: http://arxiv.org/abs/2208.08092v1
- Date: Wed, 17 Aug 2022 06:08:11 GMT
- Title: Paint2Pix: Interactive Painting based Progressive Image Synthesis and Editing
- Authors: Jaskirat Singh, Liang Zheng, Cameron Smith, Jose Echevarria
- Abstract summary: paint2pix learns to predict "what a user wants to draw" from rudimentary brushstroke inputs.
paint2pix can be used for progressive image synthesis from scratch.
Our approach also provides a surprisingly convenient means for real image editing.
- Score: 23.143394242978125
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Controllable image synthesis with user scribbles is a topic of keen interest
in the computer vision community. In this paper, for the first time we study
the problem of photorealistic image synthesis from incomplete and primitive
human paintings. In particular, we propose a novel approach, paint2pix, which
learns to predict (and adapt) "what a user wants to draw" from rudimentary
brushstroke inputs, by learning a mapping from the manifold of incomplete human
paintings to their realistic renderings. When used in conjunction with recent
works in autonomous painting agents, we show that paint2pix can be used for
progressive image synthesis from scratch. During this process, paint2pix allows
a novice user to progressively synthesize the desired image output, while
requiring just a few coarse user scribbles to accurately steer the trajectory of
the synthesis process. Furthermore, we find that our approach also provides a
surprisingly convenient means for real image editing, allowing the user to
perform a diverse range of custom fine-grained edits through the addition of
only a few well-placed brushstrokes. Supplemental video and demo are available
at https://1jsingh.github.io/paint2pix
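To make the interaction loop above concrete, here is a minimal PyTorch sketch; CanvasEncoder, the generator argument, and the stroke-compositing step are illustrative stand-ins, not the paper's actual architecture.

import torch
import torch.nn as nn

class CanvasEncoder(nn.Module):
    """Maps an incomplete canvas to a latent code (hypothetical stand-in)."""
    def __init__(self, latent_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )

    def forward(self, canvas):
        return self.net(canvas)

def progressive_synthesis(encoder, generator, canvas, user_strokes):
    """One session: composite each brushstroke, predict "what the user
    wants to draw", then let the user paint on top of the prediction."""
    for stroke_mask, stroke_rgb in user_strokes:
        canvas = canvas * (1 - stroke_mask) + stroke_rgb * stroke_mask
        latent = encoder(canvas)    # infer the intended image from the canvas
        canvas = generator(latent)  # photorealistic rendering for the next round
    return canvas

Here generator is any latent-to-image network; the key idea is simply that each coarse stroke re-steers the predicted output.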
Related papers
- ReShader: View-Dependent Highlights for Single Image View-Synthesis [5.736642774848791]
We propose to split the view synthesis process into two independent tasks of pixel reshading and relocation.
During the reshading process, we take the single image as the input and adjust its shading based on the novel camera.
This reshaded image is then used as the input to an existing view synthesis method to relocate the pixels and produce the final novel view image.
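A compact sketch of that two-stage split (all interfaces here are hypothetical; the paper plugs an existing view-synthesis method into the second stage):

def synthesize_novel_view(image, src_cam, tgt_cam, reshade_net, view_synth):
    """Stage 1: reshade pixels for the novel camera; stage 2: relocate them."""
    reshaded = reshade_net(image, src_cam, tgt_cam)  # view-dependent shading only
    return view_synth(reshaded, src_cam, tgt_cam)    # off-the-shelf view synthesis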
arXiv Detail & Related papers (2023-09-19T15:23:52Z)
- Interactive Neural Painting [66.9376011879115]
This paper proposes the first approach for Interactive Neural Painting (NP).
We propose I-Paint, a novel method based on a conditional transformer Variational AutoEncoder (VAE) architecture with a two-stage decoder.
Our experiments show that our approach provides good stroke suggestions and compares favorably to the state of the art.
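Inference with such a model might look like the following sketch; the decoder names and the latent_dim attribute are assumptions, not I-Paint's actual API.

import torch

def suggest_strokes(vae, canvas, reference, n_suggestions=5):
    """Sample stroke suggestions from a conditional VAE with a two-stage decoder."""
    suggestions = []
    for _ in range(n_suggestions):
        z = torch.randn(1, vae.latent_dim)                # sample the prior
        coarse = vae.decode_stage1(z, canvas, reference)  # coarse stroke plan
        suggestions.append(vae.decode_stage2(coarse))     # refined stroke parameters
    return suggestions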
arXiv Detail & Related papers (2023-07-31T07:02:00Z)
- Towards Interactive Image Inpainting via Sketch Refinement [13.34066589008464]
We propose a two-stage image inpainting method termed SketchRefiner.
In the first stage, we propose using a cross-correlation loss function to robustly calibrate and refine the user-provided sketches.
In the second stage, we learn to extract informative features from the abstracted sketches in the feature space and modulate the inpainting process.
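The summary does not give the exact formulation, but a generic normalized cross-correlation loss for aligning a refined sketch with a reference can be sketched as follows (an assumption-laden stand-in, not SketchRefiner's code):

import torch

def cross_correlation_loss(pred, target, eps=1e-8):
    """1 - NCC between per-sample flattened maps; lower means better aligned."""
    p = pred.flatten(1) - pred.flatten(1).mean(dim=1, keepdim=True)
    t = target.flatten(1) - target.flatten(1).mean(dim=1, keepdim=True)
    ncc = (p * t).sum(dim=1) / (p.norm(dim=1) * t.norm(dim=1) + eps)
    return (1 - ncc).mean()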
arXiv Detail & Related papers (2023-06-01T07:15:54Z)
- Novel View Synthesis of Humans using Differentiable Rendering [50.57718384229912]
We present a new approach for synthesizing novel views of people in new poses.
Our synthesis makes use of diffuse Gaussian primitives that represent the underlying skeletal structure of a human.
Rendering these primitives results in a high-dimensional latent image, which is then transformed into an RGB image by a decoder network.
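A toy 2D version of the splatting step (the paper's primitives follow a 3D skeleton and the renderer is differentiable; this stand-in only shows how Gaussians accumulate into a latent feature image that a decoder would map to RGB):

import torch

def splat_gaussians(means, features, H=64, W=64, sigma=0.05):
    """means: (N, 2) positions in [0, 1]^2; features: (N, C) per-primitive codes."""
    ys, xs = torch.linspace(0, 1, H), torch.linspace(0, 1, W)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    grid = torch.stack([gx, gy], dim=-1)                     # (H, W, 2)
    d2 = ((grid[None] - means[:, None, None]) ** 2).sum(-1)  # (N, H, W)
    weights = torch.exp(-d2 / (2 * sigma ** 2))              # diffuse falloff
    return torch.einsum("nhw,nc->chw", weights, features)    # latent image (C, H, W)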
arXiv Detail & Related papers (2023-03-28T10:48:33Z)
- High-Fidelity Guided Image Synthesis with Latent Diffusion Models [50.39294302741698]
Human user study results show that the proposed approach outperforms the previous state-of-the-art by over 85.32% on the overall user satisfaction scores.
arXiv Detail & Related papers (2022-11-30T15:43:20Z)
- SketchEdit: Mask-Free Local Image Manipulation with Partial Sketches [95.45728042499836]
We propose a new paradigm of sketch-based image manipulation: mask-free local image manipulation.
Our model automatically predicts the target modification region and encodes it into a structure style vector.
A generator then synthesizes the new image content based on the style vector and sketch.
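As a sketch of that pipeline (the function names and call signatures are hypothetical):

def sketch_edit(image, sketch, region_net, style_encoder, generator):
    """Mask-free editing: predict the region, encode its style, regenerate it."""
    region = region_net(image, sketch)    # predicted modification region
    style = style_encoder(image, region)  # structure style vector
    return generator(image * (1 - region), sketch, style)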
arXiv Detail & Related papers (2021-11-30T02:42:31Z)
- Realistic Image Synthesis with Configurable 3D Scene Layouts [59.872657806747576]
We propose a novel approach to realistic-looking image synthesis based on a 3D scene layout.
Our approach takes a 3D scene with semantic class labels as input and trains a 3D scene painting network.
With the trained painting network, realistic-looking images for the input 3D scene can be rendered and manipulated.
arXiv Detail & Related papers (2021-08-23T09:44:56Z)
- Sketch-Guided Scenery Image Outpainting [83.6612152173028]
We propose an encoder-decoder based network to conduct sketch-guided outpainting.
First, we apply a holistic alignment module to make the synthesized part similar to the real one from a global view.
Second, we reversely produce sketches from the synthesized part and encourage them to be consistent with the ground-truth ones.
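The second objective can be sketched generically (the edge_extractor and the L1 penalty are assumptions; the paper's sketch extractor and distance may differ):

import torch.nn.functional as F

def sketch_consistency_loss(synthesized, gt_sketch, edge_extractor):
    """Reversely produce a sketch from the synthesized part and match it."""
    pred_sketch = edge_extractor(synthesized)
    return F.l1_loss(pred_sketch, gt_sketch)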
arXiv Detail & Related papers (2020-06-17T11:34:36Z)
- Painting Many Pasts: Synthesizing Time Lapse Videos of Paintings [23.99927916916298]
We introduce a new video synthesis task: synthesizing time lapse videos depicting how a given painting might have been created.
We present a probabilistic model that, given a single image of a completed painting, recurrently synthesizes steps of the painting process.
We demonstrate that this model can be used to sample many time steps, enabling long-term video synthesis.
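Sampling from such a model could look like this (the sample_step interface is hypothetical):

import torch

def sample_timelapse(model, final_painting, n_steps=40):
    """Recurrently sample a plausible painting trajectory, newest canvas last."""
    frames, canvas = [], torch.zeros_like(final_painting)  # start from blank
    for _ in range(n_steps):
        # Stochastic step: many different "pasts" can be sampled for one painting.
        canvas = model.sample_step(canvas, final_painting)
        frames.append(canvas)
    return frames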
arXiv Detail & Related papers (2020-01-04T03:12:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.