Image Inpainting using Partial Convolution
- URL: http://arxiv.org/abs/2108.08791v1
- Date: Thu, 19 Aug 2021 17:01:27 GMT
- Title: Image Inpainting using Partial Convolution
- Authors: Harsh Patel, Amey Kulkarni, Shivam Sahni, Udit Vyas
- Abstract summary: The aim of this paper is to perform image inpainting using robust deep learning methods that use partial convolution layers.
In various practical applications, images are often deteriorated by noise due to the presence of corrupted, lost, or undesirable information.
- Score: 0.3441021278275805
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Image inpainting is one of the most popular tasks in the field of
image processing, with broad applications in computer vision. In various
practical applications, images are often deteriorated by noise due to the
presence of corrupted, lost, or undesirable information. Various restoration
techniques, both classical and deep-learning-based, have been used to handle
such issues. Some traditional methods restore an image by filling missing
pixels from the nearby known pixels, or by taking a moving average over them.
The aim of this paper is to perform image inpainting using robust deep
learning methods that use partial convolution layers.
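A partial convolution layer conditions each convolution window on a binary validity mask: only known pixels contribute to the response, the result is re-normalized by the fraction of valid pixels in the window, and the mask is updated so holes shrink layer by layer. The following single-channel NumPy sketch is an illustration of that idea, not the authors' implementation:

```python
import numpy as np

def partial_conv(x, mask, kernel, bias=0.0):
    """Minimal single-channel partial convolution (stride 1, 'valid' padding).

    x      : 2-D image array
    mask   : 2-D binary array, 1 = known pixel, 0 = hole
    kernel : 2-D filter weights
    Returns the filtered output and the updated mask.
    """
    kh, kw = kernel.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    new_mask = np.zeros((oh, ow))
    window_area = kh * kw
    for i in range(oh):
        for j in range(ow):
            m = mask[i:i + kh, j:j + kw]
            valid = m.sum()
            if valid > 0:
                # Convolve only over known pixels, then re-scale by the
                # ratio of window area to the number of valid pixels.
                out[i, j] = ((x[i:i + kh, j:j + kw] * m * kernel).sum()
                             * (window_area / valid) + bias)
                new_mask[i, j] = 1.0  # window saw at least one known pixel
    return out, new_mask
```

With an averaging kernel and a constant image, the re-normalization makes the output constant as well, regardless of where the holes fall: the layer effectively ignores missing pixels instead of treating them as zeros.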
Related papers
- Detecting Generated Images by Real Images Only [64.12501227493765]
Existing generated-image detection methods detect visual artifacts in generated images, or learn discriminative features from both real and generated images through large-scale training.
This paper approaches the generated image detection problem from a new perspective: Start from real images.
By finding the commonality of real images and mapping them to a dense subspace in feature space, the goal is that generated images, regardless of their generative model, are then projected outside the subspace.
arXiv Detail & Related papers (2023-11-02T03:09:37Z) - Deep Image Matting: A Comprehensive Survey [85.77905619102802]
This paper presents a review of recent advancements in image matting in the era of deep learning.
We focus on two fundamental sub-tasks: auxiliary input-based image matting and automatic image matting.
We discuss relevant applications of image matting and highlight existing challenges and potential opportunities for future research.
arXiv Detail & Related papers (2023-04-10T15:48:55Z) - Deep Image Deblurring: A Survey [165.32391279761006]
Deblurring is a classic problem in low-level computer vision, which aims to recover a sharp image from a blurred input image.
Recent advances in deep learning have led to significant progress in solving this problem.
arXiv Detail & Related papers (2022-01-26T01:31:30Z) - Saliency Constrained Arbitrary Image Style Transfer using SIFT and DCNN [22.57205921266602]
When common neural style transfer methods are used, the textures and colors in the style image are usually transferred imperfectly to the content image.
This paper proposes a novel saliency constrained method to reduce or avoid such effects.
The experiments show that the saliency maps of source images can help find the correct matching and avoid artifacts.
arXiv Detail & Related papers (2022-01-14T09:00:55Z) - Image Inpainting Using AutoEncoder and Guided Selection of Predicted
Pixels [9.527576103168984]
In this paper, we propose a network for image inpainting. This network, similar to U-Net, extracts various features from images, leading to better results.
We improved the final results by replacing the damaged pixels with the recovered pixels of the output images.
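The replacement step described above amounts to a mask-guided composite: keep the original pixels wherever they are known, and take the network's output only inside the holes. A minimal NumPy sketch (the `hole_mask` convention, 1 = damaged pixel, is an assumption, not taken from the paper):

```python
import numpy as np

def composite(damaged, predicted, hole_mask):
    """Blend a damaged image with a network prediction.

    hole_mask is 1 where pixels are damaged (replaced by the prediction)
    and 0 where pixels are known (kept from the original image).
    """
    return hole_mask * predicted + (1.0 - hole_mask) * damaged
```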
arXiv Detail & Related papers (2021-12-17T00:10:34Z) - Deep Two-Stage High-Resolution Image Inpainting [0.0]
In this article, we propose a method that solves the problem of inpainting arbitrary-size images.
For this, we propose to use information from neighboring pixels by shifting the original image in four directions.
This approach can work with existing inpainting models, making them almost resolution independent without the need for retraining.
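The four-direction shifting idea can be sketched as follows. This illustrative NumPy version replicates border pixels when shifting, which is a guessed padding scheme rather than the paper's exact method:

```python
import numpy as np

def four_direction_shifts(img, offset=1):
    """Return copies of a 2-D image shifted up, down, left, and right.

    Rows/columns exposed by the shift are filled by replicating the
    nearest border pixel (an assumed boundary treatment).
    """
    p = np.pad(img, offset, mode="edge")
    h, w = img.shape
    return {
        "up":    p[2 * offset:2 * offset + h, offset:offset + w],
        "down":  p[0:h,                       offset:offset + w],
        "left":  p[offset:offset + h,         2 * offset:2 * offset + w],
        "right": p[offset:offset + h,         0:w],
    }
```

Feeding these shifted copies to a model alongside the original gives each pixel direct access to its neighbors' values, which is one plausible reading of how such shifts supply neighboring-pixel information.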
arXiv Detail & Related papers (2021-04-27T20:32:21Z) - TransFill: Reference-guided Image Inpainting by Merging Multiple Color and Spatial Transformations [35.9576572490994]
We propose TransFill, a multi-homography transformed fusion method to fill the hole by referring to another source image that shares scene contents with the target image.
We learn to adjust the color and apply a pixel-level warping to each homography-warped source image to make it more consistent with the target.
Our method achieves state-of-the-art performance on pairs of images across a variety of wide baselines and color differences, and generalizes to user-provided image pairs.
arXiv Detail & Related papers (2021-03-29T22:45:07Z) - Deep Image Compositing [93.75358242750752]
We propose a new method which can automatically generate high-quality image composites without any user input.
Inspired by Laplacian pyramid blending, a dense-connected multi-stream fusion network is proposed to effectively fuse the information from the foreground and background images.
Experiments show that the proposed method can automatically generate high-quality composites and outperforms existing methods both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-11-04T06:12:24Z) - A Study of Image Pre-processing for Faster Object Recognition [0.0]
A good-quality image gives a better recognition or classification rate than an unprocessed, noisy image.
It is more difficult to extract features from such unprocessed images, which in turn reduces the object recognition or classification rate.
Our project proposes an image pre-processing method so that the performance of selected machine learning or deep learning algorithms improves, in terms of either increased accuracy or a reduced number of training images.
arXiv Detail & Related papers (2020-10-31T02:55:17Z) - Free-Form Image Inpainting via Contrastive Attention Network [64.05544199212831]
In image inpainting tasks, masks of arbitrary shape can appear anywhere in an image, forming complex patterns.
It is difficult for encoders to learn sufficiently powerful representations under such complex conditions.
We propose a self-supervised Siamese inference network to improve the robustness and generalization.
arXiv Detail & Related papers (2020-10-29T14:46:05Z) - Exploiting Deep Generative Prior for Versatile Image Restoration and Manipulation [181.08127307338654]
This work presents an effective way to exploit the image prior captured by a generative adversarial network (GAN) trained on large-scale natural images.
The deep generative prior (DGP) provides compelling results to restore missing semantics, e.g., color, patch, resolution, of various degraded images.
arXiv Detail & Related papers (2020-03-30T17:45:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.