TransFill: Reference-guided Image Inpainting by Merging Multiple Color and Spatial Transformations
- URL: http://arxiv.org/abs/2103.15982v1
- Date: Mon, 29 Mar 2021 22:45:07 GMT
- Title: TransFill: Reference-guided Image Inpainting by Merging Multiple Color and Spatial Transformations
- Authors: Yuqian Zhou, Connelly Barnes, Eli Shechtman, Sohrab Amirghodsi
- Abstract summary: We propose TransFill, a multi-homography transformed fusion method to fill the hole by referring to another source image that shares scene contents with the target image.
We learn to adjust the color and apply a pixel-level warping to each homography-warped source image to make it more consistent with the target.
Our method achieves state-of-the-art performance on pairs of images across a variety of wide baselines and color differences, and generalizes to user-provided image pairs.
- Score: 35.9576572490994
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Image inpainting is the task of plausibly restoring missing pixels within a
hole region that is to be removed from a target image. Most existing
technologies exploit patch similarities within the image, or leverage
large-scale training data to fill the hole using learned semantic and texture
information. However, due to the ill-posed nature of the inpainting task, such
methods struggle to complete larger holes containing complicated scenes. In
this paper, we propose TransFill, a multi-homography transformed fusion method
to fill the hole by referring to another source image that shares scene
contents with the target image. We first align the source image to the target
image by estimating multiple homographies guided by different depth levels. We
then learn to adjust the color and apply a pixel-level warping to each
homography-warped source image to make it more consistent with the target.
Finally, a pixel-level fusion module is learned to selectively merge the
different proposals. Our method achieves state-of-the-art performance on pairs
of images across a variety of wide baselines and color differences, and
generalizes to user-provided image pairs.
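For intuition, the three stages above can be condensed into a minimal, non-learned sketch using OpenCV and NumPy. Everything specific here is an assumption rather than the paper's method: SIFT matching, a y-coordinate split standing in for depth levels, per-channel gain matching standing in for the learned color transform, and inverse-error weights standing in for the learned fusion module; the function name transfill_sketch is hypothetical.

```python
import cv2
import numpy as np

def transfill_sketch(source, target, hole_mask):
    """Non-learned sketch of a TransFill-style pipeline: (1) multiple
    homography proposals, (2) per-proposal color adjustment, (3) fusion.
    hole_mask: (H, W) array, nonzero inside the hole to be filled."""
    # 1. Sparse correspondences between source and target (SIFT + ratio test).
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(source, None)
    k2, d2 = sift.detectAndCompute(target, None)
    pairs = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    good = [m[0] for m in pairs
            if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]
    src_pts = np.float32([k1[m.queryIdx].pt for m in good])
    tgt_pts = np.float32([k2[m.trainIdx].pt for m in good])

    # Group matches into pseudo depth levels (the paper uses estimated
    # depth; a y-coordinate split is a crude stand-in for illustration).
    levels = (tgt_pts[:, 1] > np.median(tgt_pts[:, 1])).astype(int)

    h, w = target.shape[:2]
    known = hole_mask == 0
    proposals, weights = [], []
    for lvl in np.unique(levels):
        pts = levels == lvl
        if pts.sum() < 4:                       # homography needs >= 4 points
            continue
        H, _ = cv2.findHomography(src_pts[pts], tgt_pts[pts], cv2.RANSAC, 3.0)
        if H is None:
            continue
        warped = cv2.warpPerspective(source, H, (w, h)).astype(np.float64)

        # 2. Color adjustment: per-channel gain matching outside the hole
        # (stands in for the learned color/spatial transformation module).
        for c in range(3):
            gain = target[..., c][known].mean() / (warped[..., c][known].mean() + 1e-6)
            warped[..., c] = np.clip(warped[..., c] * gain, 0, 255)

        # 3. Fusion weight: inverse reconstruction error on known pixels
        # (stands in for the learned pixel-level fusion module).
        err = np.abs(warped - target)[known].mean()
        proposals.append(warped)
        weights.append(1.0 / (err + 1e-6))

    if not proposals:                           # alignment failed entirely
        return target
    fused = np.average(proposals, axis=0, weights=weights)
    out = target.copy()
    out[~known] = fused[~known].astype(target.dtype)
    return out
```

In the paper itself, the color and spatial adjustments and the fusion weights are produced by trained networks; this sketch only mirrors the propose-adjust-merge structure.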
Related papers
- Continuous-Multiple Image Outpainting in One-Step via Positional Query and A Diffusion-based Approach [104.2588068730834]
This paper pushes the technical frontier of image outpainting in two directions that have not been resolved in the literature.
We develop a method that does not depend on a pre-trained backbone network.
We evaluate the proposed approach (called PQDiff) on public benchmarks, demonstrating its superior performance over state-of-the-art approaches.
arXiv Detail & Related papers (2024-01-28T13:00:38Z)
- DIAR: Deep Image Alignment and Reconstruction using Swin Transformers [3.1000291317724993]
We create a dataset of distorted images.
The perspective distortions are paired with their ground-truth homographies as labels.
We use our dataset to train Swin transformer models to analyze sequential image data.
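A minimal sketch of how such a labeled distortion dataset can be synthesized with OpenCV; the corner-jitter scheme and the max_shift parameter are illustrative assumptions, not details from the paper.

```python
import cv2
import numpy as np

def random_perspective_pair(image, max_shift=0.15, rng=None):
    """Generate a perspectively distorted image plus the ground-truth
    homography that maps the original onto the distortion."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    # Jitter each corner by up to max_shift of the image dimensions.
    jitter = rng.uniform(-max_shift, max_shift, (4, 2)) * [w, h]
    distorted = (corners + jitter).astype(np.float32)
    H = cv2.getPerspectiveTransform(corners, distorted)
    warped = cv2.warpPerspective(image, H, (w, h))
    return warped, H  # (input sample, ground-truth label)
```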
arXiv Detail & Related papers (2023-10-17T21:59:45Z)
- SuperInpaint: Learning Detail-Enhanced Attentional Implicit Representation for Super-resolutional Image Inpainting [26.309834304515544]
We introduce a challenging image restoration task, referred to as SuperInpaint.
This task aims to reconstruct missing regions in low-resolution images and generate completed images with arbitrarily higher resolutions.
We propose the detail-enhanced attentional implicit representation that can achieve SuperInpaint with a single model.
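The "arbitrarily higher resolutions" property is characteristic of implicit representations, which decode RGB at continuous coordinates rather than on a fixed pixel grid. A generic coordinate-MLP sketch in PyTorch; the dimensions and architecture are illustrative assumptions, not the paper's detail-enhanced attentional module.

```python
import torch
import torch.nn as nn

class ImplicitDecoder(nn.Module):
    """Generic implicit decoder: (feature, continuous (x, y) coord) -> RGB,
    so the same latent content can be sampled at any output resolution."""
    def __init__(self, feat_dim=64, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, feat, coords):
        # feat: (N, feat_dim), coords: (N, 2) in [-1, 1]
        return self.mlp(torch.cat([feat, coords], dim=-1))

# Sample the same feature at a dense 512x512 grid; a denser linspace
# would yield an arbitrarily higher-resolution output.
decoder = ImplicitDecoder()
feat = torch.randn(1, 64).expand(512 * 512, 64)
ys, xs = torch.meshgrid(torch.linspace(-1, 1, 512),
                        torch.linspace(-1, 1, 512), indexing="ij")
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
rgb = decoder(feat, coords).reshape(512, 512, 3)
```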
arXiv Detail & Related papers (2023-07-26T20:28:58Z)
- Unbiased Multi-Modality Guidance for Image Inpainting [27.286351511243502]
We develop an end-to-end multi-modality guided transformer network for image inpainting.
Within each transformer block, the proposed spatial-aware attention module can learn the multi-modal structural features efficiently.
Our method enriches semantically consistent context in an image based on discriminative information from multiple modalities.
arXiv Detail & Related papers (2022-08-25T03:13:43Z)
- Cylin-Painting: Seamless 360° Panoramic Image Outpainting and Beyond [136.18504104345453]
We present a Cylin-Painting framework that involves meaningful collaborations between inpainting and outpainting.
The proposed algorithm can be effectively extended to other panoramic vision tasks, such as object detection, depth estimation, and image super-resolution.
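The seam constraint comes from the cylindrical topology of a 360° panorama: its left and right borders are adjacent. A minimal illustration of wrap-around padding, which keeps that seam continuous for padding-sensitive operations such as outpainting; this is a standard trick shown for intuition, not the paper's actual framework.

```python
import numpy as np

def wrap_pad_panorama(pano, pad):
    """Circularly pad a 360-degree panorama along its width so the left
    and right borders, which meet on the cylinder, stay connected."""
    # pano: (H, W, C); pad only the width axis with wrap-around content.
    return np.pad(pano, ((0, 0), (pad, pad), (0, 0)), mode="wrap")

pano = np.random.rand(256, 1024, 3)
padded = wrap_pad_panorama(pano, pad=32)
assert np.allclose(padded[:, :32], pano[:, -32:])  # seam continuity
```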
arXiv Detail & Related papers (2022-04-18T21:18:49Z)
- In&Out : Diverse Image Outpainting via GAN Inversion [89.84841983778672]
Image outpainting seeks a semantically consistent extension of the input image beyond its available content.
In this work, we formulate the problem from the perspective of inverting generative adversarial networks.
Our generator renders micro-patches conditioned on their joint latent code as well as their individual positions in the image.
arXiv Detail & Related papers (2021-04-01T17:59:10Z)
- Free-Form Image Inpainting via Contrastive Attention Network [64.05544199212831]
In image inpainting tasks, masks of arbitrary shape can appear anywhere in an image, forming complex patterns.
It is difficult for encoders to capture robust representations in such complex situations.
We propose a self-supervised Siamese inference network to improve the robustness and generalization.
arXiv Detail & Related papers (2020-10-29T14:46:05Z)
- High-Resolution Image Inpainting with Iterative Confidence Feedback and Guided Upsampling [122.06593036862611]
Existing image inpainting methods often produce artifacts when dealing with large holes in real applications.
We propose an iterative inpainting method with a feedback mechanism.
Experiments show that our method significantly outperforms existing methods in both quantitative and qualitative evaluations.
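A toy illustration of a confidence-feedback loop: each pass commits only the hole pixels the predictor is confident about and feeds them back as known context for the next pass. The neighborhood-average "model" and the coverage-based confidence are deliberate simplifications standing in for the trained network, not the paper's architecture.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def iterative_inpaint(image, hole_mask, conf_thresh=0.3, max_iters=50):
    """Toy iterative inpainting with confidence feedback: commit only
    high-confidence pixels each pass, then re-run on the shrunken hole."""
    img = image.astype(float).copy()
    known = (hole_mask == 0).astype(float)
    for _ in range(max_iters):
        if known.all():
            break
        # Stand-in "model": average of the known pixels in a 5x5 window.
        weighted = uniform_filter(img * known[..., None], size=(5, 5, 1))
        coverage = uniform_filter(known, size=5)
        estimate = weighted / np.maximum(coverage[..., None], 1e-6)
        # Confidence = fraction of known neighbors; feedback step commits
        # only confident hole pixels and enlarges the known region.
        commit = (known == 0) & (coverage > conf_thresh)
        img[commit] = estimate[commit]
        known[commit] = 1.0
    return img
```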
arXiv Detail & Related papers (2020-05-24T13:23:45Z)
- Learning Transformation-Aware Embeddings for Image Forensics [15.484408315588569]
Image Provenance Analysis aims at discovering relationships among different manipulated image versions that share content.
One of the main sub-problems for provenance analysis that has not yet been addressed directly is the edit ordering of images that share full content or are near-duplicates.
This paper introduces a novel deep learning-based approach to provide a plausible ordering to images that have been generated from a single image through transformations.
arXiv Detail & Related papers (2020-01-13T22:01:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.