Towards Coherent Image Inpainting Using Denoising Diffusion Implicit
Models
- URL: http://arxiv.org/abs/2304.03322v1
- Date: Thu, 6 Apr 2023 18:35:13 GMT
- Title: Towards Coherent Image Inpainting Using Denoising Diffusion Implicit
Models
- Authors: Guanhua Zhang, Jiabao Ji, Yang Zhang, Mo Yu, Tommi Jaakkola, Shiyu
Chang
- Abstract summary: We propose COPAINT, which can coherently inpaint the whole image without introducing mismatches.
COPAINT also uses the Bayesian framework to jointly modify both revealed and unrevealed regions.
Our experiments verify that COPAINT can outperform the existing diffusion-based methods under both objective and subjective metrics.
- Score: 43.83732051916894
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Image inpainting refers to the task of generating a complete, natural image
based on a partially revealed reference image. Recently, much research
interest has focused on addressing this problem using fixed diffusion
models. These approaches typically directly replace the revealed region of the
intermediate or final generated images with that of the reference image or its
variants. However, since the unrevealed regions are not directly modified to
match the context, this results in incoherence between the revealed and
unrevealed regions. To address the incoherence problem, a small number of methods
introduce a rigorous Bayesian framework, but they tend to introduce mismatches
between the generated and the reference images due to the approximation errors
in computing the posterior distributions. In this paper, we propose COPAINT,
which can coherently inpaint the whole image without introducing mismatches.
COPAINT also uses the Bayesian framework to jointly modify both revealed and
unrevealed regions, but approximates the posterior distribution in a way that
allows the errors to gradually drop to zero throughout the denoising steps,
thus strongly penalizing any mismatches with the reference image. Our
experiments verify that COPAINT can outperform the existing diffusion-based
methods under both objective and subjective metrics. The code is available at
https://github.com/UCSB-NLP-Chang/CoPaint/.
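To make the contrast in the abstract concrete, the sketch below illustrates both strategies in PyTorch-flavored code: the common replacement step that overwrites the revealed region at each denoising step, and a CoPaint-style gradient correction that instead adjusts the whole intermediate image so its one-step denoised estimate matches the reference. This is a minimal illustrative sketch, not the authors' algorithm; eps_model, alphas_bar, timesteps, lr, and n_grad_steps are assumed placeholders, and the real implementation lives in the repository linked above.
```python
import torch

def ddim_inpaint_sketch(eps_model, x_ref, mask, alphas_bar, timesteps,
                        lr=0.1, n_grad_steps=2):
    # Hedged sketch: DDIM (eta = 0) inpainting with a gradient-based
    # coherence step in the spirit of CoPaint. eps_model(x, t) is an
    # assumed noise-prediction network; alphas_bar is the cumulative
    # noise schedule (a 1-D tensor); mask is 1 on revealed pixels and
    # 0 elsewhere. lr and n_grad_steps are illustrative placeholders.
    x = torch.randn_like(x_ref)                    # start from pure noise
    for i, t in enumerate(timesteps):              # t runs from high to low
        a_t = alphas_bar[t]
        a_prev = (alphas_bar[timesteps[i + 1]] if i + 1 < len(timesteps)
                  else torch.tensor(1.0))

        # CoPaint-style refinement: nudge x_t so the one-step denoised
        # estimate matches the reference on the revealed region. The paper
        # derives its update from a Bayesian posterior approximation whose
        # error shrinks over the denoising steps; plain gradient descent
        # here is only an illustrative stand-in.
        x = x.detach().requires_grad_(True)
        for _ in range(n_grad_steps):
            eps = eps_model(x, t)
            x0_hat = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()
            loss = ((mask * (x0_hat - x_ref)) ** 2).sum()
            (grad,) = torch.autograd.grad(loss, x)
            x = (x - lr * grad).detach().requires_grad_(True)

        # Deterministic DDIM update using the refined x_t.
        with torch.no_grad():
            eps = eps_model(x, t)
            x0_hat = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()
            x = a_prev.sqrt() * x0_hat + (1 - a_prev).sqrt() * eps

        # Replacement-based baselines skip the refinement above and instead
        # overwrite the revealed region with a noised copy of the reference:
        # x = mask * (a_prev.sqrt() * x_ref
        #             + (1 - a_prev).sqrt() * torch.randn_like(x)) \
        #     + (1 - mask) * x
    return x
```
In CoPaint proper, the posterior approximation is scheduled so its error drops to zero over the denoising steps, which strongly penalizes any remaining mismatch with the reference by the final step; the gradient loop above only gestures at that behavior.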
Related papers
- Learning to Rank Patches for Unbiased Image Redundancy Reduction [80.93989115541966]
Images suffer from heavy spatial redundancy because pixels in neighboring regions are spatially correlated.
Existing approaches strive to overcome this limitation by reducing less meaningful image regions.
We propose a self-supervised framework for image redundancy reduction called Learning to Rank Patches.
arXiv Detail & Related papers (2024-03-31T13:12:41Z)
- RecDiffusion: Rectangling for Image Stitching with Diffusion Models [53.824503710254206]
We introduce a novel diffusion-based learning framework, RecDiffusion, for image stitching rectangling.
This framework combines Motion Diffusion Models (MDM) to generate motion fields, effectively transitioning from the stitched image's irregular borders to a geometrically corrected intermediary.
arXiv Detail & Related papers (2024-03-28T06:22:45Z)
- DARC: Distribution-Aware Re-Coloring Model for Generalizable Nucleus Segmentation [68.43628183890007]
We argue that domain gaps can also be caused by different foreground (nucleus)-background ratios.
First, we introduce a re-coloring method that relieves dramatic image color variations between different domains.
Second, we propose a new instance normalization method that is robust to the variation in the foreground-background ratios.
arXiv Detail & Related papers (2023-09-01T01:01:13Z)
- Sequential edge detection using joint hierarchical Bayesian learning [5.182970026171219]
This paper introduces a new sparse Bayesian learning (SBL) algorithm that jointly recovers a temporal sequence of edge maps from noisy and under-sampled Fourier data.
Our numerical examples demonstrate that our new method compares favorably with more standard SBL approaches.
arXiv Detail & Related papers (2023-02-28T02:09:44Z)
- MIDMs: Matching Interleaved Diffusion Models for Exemplar-based Image Translation [29.03892463588357]
We present a novel method for exemplar-based image translation, called matching interleaved diffusion models (MIDMs).
We formulate a diffusion-based matching-and-generation framework that interleaves cross-domain matching and diffusion steps in the latent space.
To improve the reliability of the diffusion process, we design a confidence-aware process using cycle-consistency to consider only confident regions.
arXiv Detail & Related papers (2022-09-22T14:43:52Z)
- Region-aware Attention for Image Inpainting [33.22497212024083]
We propose a novel region-aware attention (RA) module for inpainting images.
By not directly computing the correlation between every pixel pair within a single sample, the module avoids being misled by invalid information inside the holes.
A learnable region dictionary (LRD) is introduced to store important information in the entire dataset.
Our method can generate semantically plausible results with realistic details.
arXiv Detail & Related papers (2022-04-03T06:26:22Z)
- Deep Variational Network Toward Blind Image Restoration [60.45350399661175]
Blind image restoration is a common yet challenging problem in computer vision.
We propose a novel blind image restoration method that aims to integrate the advantages of both existing paradigms.
Experiments on two typical blind IR tasks, namely image denoising and super-resolution, demonstrate that the proposed method achieves superior performance over current state-of-the-art methods.
arXiv Detail & Related papers (2020-08-25T03:30:53Z)
- Image Fine-grained Inpainting [89.17316318927621]
We present a one-stage model that utilizes dense combinations of dilated convolutions to obtain larger and more effective receptive fields (a minimal sketch of such a block follows this list).
To better train this efficient generator, in addition to the frequently-used VGG feature matching loss, we design a novel self-guided regression loss.
We also employ a discriminator with local and global branches to ensure local-global contents consistency.
arXiv Detail & Related papers (2020-02-07T03:45:25Z)
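As referenced in the Image Fine-grained Inpainting entry above, dilated convolutions enlarge the receptive field without adding parameters per branch. The block below is a minimal, generic sketch of a dense dilated-convolution block under assumed channel counts and dilation rates; it is not that paper's actual generator.
```python
import torch
import torch.nn as nn

class DenseDilatedBlock(nn.Module):
    # Hedged sketch: parallel 3x3 branches with increasing dilation rates
    # enlarge the receptive field; their outputs are concatenated and fused
    # by a 1x1 convolution. Channel counts and rates are illustrative.
    def __init__(self, channels=64, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3,
                      padding=d, dilation=d)  # padding=d keeps spatial size
            for d in dilations
        ])
        self.fuse = nn.Conv2d(channels * len(dilations), channels,
                              kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        feats = [self.act(branch(x)) for branch in self.branches]
        out = self.fuse(torch.cat(feats, dim=1))
        return self.act(out + x)  # residual connection stabilizes training
```
Stacking several such blocks grows the effective receptive field rapidly while keeping the generator one-stage, which is the property the entry credits for filling large holes with context-aware content.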