Reference-based Painterly Inpainting via Diffusion: Crossing the Wild Reference Domain Gap
- URL: http://arxiv.org/abs/2307.10584v1
- Date: Thu, 20 Jul 2023 04:51:10 GMT
- Title: Reference-based Painterly Inpainting via Diffusion: Crossing the Wild Reference Domain Gap
- Authors: Dejia Xu, Xingqian Xu, Wenyan Cong, Humphrey Shi, Zhangyang Wang
- Abstract summary: RefPaint tackles Reference-based Painterly Inpainting, a novel task that crosses the wild reference domain gap and implants novel objects into artworks.
Our method enables creative painterly image inpainting with reference objects that would otherwise be difficult to achieve.
- Score: 80.19252970827552
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Have you ever imagined how it would look if we placed new objects into
paintings? For example, what would it look like if we placed a basketball into
Claude Monet's "Water Lilies, Evening Effect"? We propose Reference-based
Painterly Inpainting, a novel task that crosses the wild reference domain gap
and implants novel objects into artworks. Although previous works have examined
reference-based inpainting, they are not designed for large domain
discrepancies between the target and the reference, such as inpainting an
artistic image using a photorealistic reference. This paper proposes a novel
diffusion framework, dubbed RefPaint, to "inpaint more wildly" by taking such
references with large domain gaps. Built with an image-conditioned diffusion
model, we introduce a ladder-side branch and a masked fusion mechanism to work
with the inpainting mask. By decomposing the CLIP image embeddings at inference
time, one can manipulate the strength of semantic and style information with
ease. Experiments demonstrate that our proposed RefPaint framework produces
significantly better results than existing methods. Our method enables creative
painterly image inpainting with reference objects that would otherwise be
difficult to achieve. Project page: https://vita-group.github.io/RefPaint/
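The abstract names two inference-time mechanisms: a masked fusion that composites model output only inside the inpainting mask, and a decomposition of the CLIP image embedding that lets one rescale semantic versus style strength. The abstract does not spell out the decomposition, so the following is a minimal Python sketch of one plausible reading, in which the semantic component is the projection of the reference embedding onto a content direction (e.g., the CLIP text embedding of the object's name) and the style component is the residual; `content_dir`, `alpha`, and `beta` are illustrative assumptions, not the authors' API.

```python
import numpy as np

def masked_fusion(generated, original, mask):
    """Composite: keep generated pixels inside the inpainting mask,
    original pixels outside. mask is 1 inside the hole, 0 elsewhere."""
    return mask * generated + (1.0 - mask) * original

def decompose_clip_embedding(e_ref, content_dir, alpha=1.0, beta=1.0):
    """Hypothetical decomposition of a CLIP image embedding into a
    semantic component (projection onto a content direction) and a
    style component (the residual). alpha/beta rescale their strengths."""
    content_dir = content_dir / np.linalg.norm(content_dir)
    e_sem = np.dot(e_ref, content_dir) * content_dir   # semantic part
    e_style = e_ref - e_sem                            # style part (residual)
    e_cond = alpha * e_sem + beta * e_style
    return e_cond / np.linalg.norm(e_cond)             # re-normalize for CLIP space

# Toy usage: damp the photorealistic style of a wild reference (beta < 1)
# while keeping its semantics, before conditioning the diffusion model.
e_ref = np.random.randn(512); e_ref /= np.linalg.norm(e_ref)
content_dir = np.random.randn(512)
e_cond = decompose_clip_embedding(e_ref, content_dir, alpha=1.0, beta=0.3)
```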
Related papers
- Inverse Painting: Reconstructing The Painting Process [24.57538165449989]
We formulate this as an autoregressive image generation problem, in which an initially blank "canvas" is iteratively updated.
The model learns from real artists by training on many painting videos.
arXiv Detail & Related papers (2024-09-30T17:56:52Z)
- RefFusion: Reference Adapted Diffusion Models for 3D Scene Inpainting [63.567363455092234]
RefFusion is a novel 3D inpainting method based on a multi-scale personalization of an image inpainting diffusion model to the given reference view.
Our framework achieves state-of-the-art results for object removal while maintaining high controllability.
arXiv Detail & Related papers (2024-04-16T17:50:02Z)
- RealFill: Reference-Driven Generation for Authentic Image Completion [84.98377627001443]
RealFill is a generative inpainting model that is personalized using only a few reference images of a scene.
RealFill is able to complete a target image with visually compelling contents that are faithful to the original scene.
arXiv Detail & Related papers (2023-09-28T17:59:29Z)
- Inst-Inpaint: Instructing to Remove Objects with Diffusion Models [18.30057229657246]
In this work, we are interested in an image inpainting algorithm that estimates which object should be removed based on natural language input and simultaneously removes it.
We present a novel inpainting framework, Inst-Inpaint, that can remove objects from images based on the instructions given as text prompts.
arXiv Detail & Related papers (2023-04-06T17:29:50Z)
- Perceptual Artifacts Localization for Inpainting [60.5659086595901]
We propose a new learning task of automatic segmentation of inpainting perceptual artifacts.
We train advanced segmentation networks on a dataset to reliably localize inpainting artifacts within inpainted images.
We also propose a new evaluation metric called Perceptual Artifact Ratio (PAR), the ratio of objectionable inpainted regions to the entire inpainted area (a minimal computation of this ratio is sketched after this list).
arXiv Detail & Related papers (2022-08-05T18:50:51Z)
- Cylin-Painting: Seamless 360° Panoramic Image Outpainting and Beyond [136.18504104345453]
We present a Cylin-Painting framework that involves meaningful collaborations between inpainting and outpainting.
The proposed algorithm can be effectively extended to other panoramic vision tasks, such as object detection, depth estimation, and image super-resolution.
arXiv Detail & Related papers (2022-04-18T21:18:49Z)
- Markpainting: Adversarial Machine Learning meets Inpainting [17.52885087481822]
Inpainting is a learned technique used to fill in masked or missing regions of an image.
We show how an image owner with access to an inpainting model can augment their image in such a way that any attempt to edit it using that model will add arbitrary visible information.
We show that our markpainting technique is transferable to models that have different architectures or were trained on different datasets, so watermarks created using it are difficult for adversaries to remove (a toy version of this perturbation loop is sketched after this list).
arXiv Detail & Related papers (2021-06-01T17:45:52Z)
- In&Out : Diverse Image Outpainting via GAN Inversion [89.84841983778672]
Image outpainting seeks a semantically consistent extension of the input image beyond its available content.
In this work, we formulate the problem from the perspective of inverting generative adversarial networks.
Our generator renders micro-patches conditioned on their joint latent code as well as their individual positions in the image.
arXiv Detail & Related papers (2021-04-01T17:59:10Z)
- Paint by Word [32.05329583044764]
We investigate the problem of zero-shot semantic image painting.
Instead of painting modifications into an image using only concrete colors or a finite set of semantic concepts, we ask how to create semantic paint based on open full-text descriptions.
Our method combines a state-of-the-art generative model of realistic images with a state-of-the-art text-image semantic similarity network.
arXiv Detail & Related papers (2021-03-19T17:59:08Z)
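As noted in the Perceptual Artifacts Localization entry above, PAR is a simple area ratio. Here is a minimal NumPy computation, under the assumption that the predicted artifact region and the inpainted region are given as binary masks of the same shape (variable names are illustrative, not from the paper):

```python
import numpy as np

def perceptual_artifact_ratio(artifact_mask, inpaint_mask):
    """PAR = area of objectionable (artifact) pixels inside the
    inpainted region / total inpainted area. Both inputs are
    boolean arrays of the same shape; artifacts are only counted
    inside the inpainted region."""
    inpainted = inpaint_mask.astype(bool)
    artifacts = artifact_mask.astype(bool) & inpainted
    area = inpainted.sum()
    return artifacts.sum() / area if area > 0 else 0.0
```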
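And here is a toy version of the markpainting perturbation loop from the Markpainting entry: a PGD-style optimization that nudges the visible pixels so that an inpainting model reconstructs a chosen mark inside the hole. This is a sketch of the idea only; `toy_inpainter` is a stand-in differentiable module, not the paper's model, and the loss and perturbation budget are illustrative assumptions.

```python
import torch

def markpaint(image, mask, target_mark, inpainter, steps=100, eps=8/255, lr=1/255):
    """Find a small perturbation delta (||delta||_inf <= eps) on the
    visible pixels so that the inpainter's fill of the masked region
    moves toward a chosen visible mark (PGD-style loop)."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        # The model only sees the perturbed visible pixels.
        filled = inpainter((image + delta) * (1 - mask))
        # Drive the fill inside the hole toward the target mark.
        loss = ((filled * mask - target_mark * mask) ** 2).mean()
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()   # descend: make fill match the mark
            delta.clamp_(-eps, eps)           # keep perturbation imperceptible
            delta.grad.zero_()
    return (image + delta).detach()

# Stand-in differentiable "inpainter" for illustration only.
toy_inpainter = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)
img = torch.rand(1, 3, 64, 64)
msk = torch.zeros_like(img); msk[..., 16:48, 16:48] = 1
mark = torch.ones_like(img)  # e.g., a white watermark inside the hole
protected = markpaint(img, msk, mark, toy_inpainter)
```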