Image Fine-grained Inpainting
- URL: http://arxiv.org/abs/2002.02609v2
- Date: Sun, 4 Oct 2020 03:52:51 GMT
- Title: Image Fine-grained Inpainting
- Authors: Zheng Hui, Jie Li, Xiumei Wang, Xinbo Gao
- Abstract summary: We present a one-stage model that utilizes dense combinations of dilated convolutions to obtain larger and more effective receptive fields.
To better train this efficient generator, in addition to the frequently-used VGG feature matching loss, we design a novel self-guided regression loss.
We also employ a discriminator with local and global branches to ensure local-global contents consistency.
- Score: 89.17316318927621
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image inpainting techniques have shown promising improvement with the
assistance of generative adversarial networks (GANs) recently. However, most of
them produce completed results that suffer from implausible structures or
blurriness. To mitigate this problem, we present in this paper a one-stage
model that utilizes dense combinations of dilated convolutions to obtain larger
and more effective receptive fields. Benefiting from this property of the
network, we can more easily recover large regions in an incomplete image. To
better train this efficient generator, in addition to the frequently-used VGG
feature matching loss, we design a novel self-guided regression loss that
concentrates on uncertain areas and enhances semantic details. We also devise a
geometrical alignment constraint term to compensate for the pixel-wise distance
between predicted features and ground-truth ones. We also employ a
discriminator with local and global branches to ensure local-global contents
consistency. To further improve the quality of generated images, discriminator
feature matching on the local branch is introduced, which dynamically minimizes
the discrepancy between intermediate features of synthetic and ground-truth
patches. Extensive experiments on several public datasets demonstrate that our
approach outperforms current state-of-the-art methods. Code is available at
https://github.com/Zheng222/DMFN.
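The headline claim, that stacking dilated convolutions yields larger effective receptive fields, can be checked with the standard receptive-field formula for stride-1 convolution stacks. The sketch below is illustrative only; the layer configuration is a hypothetical example, not the exact DMFN architecture:

```python
def receptive_field(kernel_sizes, dilations):
    """Effective receptive field of a stack of stride-1 convolutions:
    rf = 1 + sum over layers of (kernel_size - 1) * dilation."""
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += (k - 1) * d
    return rf

# Four plain 3x3 layers vs. four 3x3 layers with growing dilation rates.
plain = receptive_field([3, 3, 3, 3], [1, 1, 1, 1])
dilated = receptive_field([3, 3, 3, 3], [1, 2, 4, 8])
print(plain, dilated)  # 9 31
```

With the same parameter count, the dilated stack covers a 31-pixel extent versus 9 for the plain stack, which is why such combinations help recover large missing regions.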
Related papers
- Distance Weighted Trans Network for Image Completion [52.318730994423106]
We propose a new architecture that relies on Distance-based Weighted Transformer (DWT) to better understand the relationships between an image's components.
CNNs are used to augment the local texture information of coarse priors.
DWT blocks are used to recover certain coarse textures and coherent visual structures.
arXiv Detail & Related papers (2023-10-11T12:46:11Z) - Fine-grained Recognition with Learnable Semantic Data Augmentation [68.48892326854494]
Fine-grained image recognition is a longstanding computer vision challenge.
We propose diversifying the training data at the feature-level to alleviate the discriminative region loss problem.
Our method significantly improves the generalization performance on several popular classification networks.
arXiv Detail & Related papers (2023-09-01T11:15:50Z) - Look at the Neighbor: Distortion-aware Unsupervised Domain Adaptation
for Panoramic Semantic Segmentation [5.352137021024213]
The aim is to tackle the domain gaps caused by style disparities and the distortion problem arising from the non-uniformly distributed pixels of equirectangular projection (ERP).
We propose a novel UDA framework that can effectively address the distortion problems for panoramic semantic segmentation.
arXiv Detail & Related papers (2023-08-10T10:47:12Z) - Multi-cropping Contrastive Learning and Domain Consistency for
Unsupervised Image-to-Image Translation [5.562419999563734]
We propose a novel unsupervised image-to-image translation framework based on multi-cropping contrastive learning and domain consistency, called MCDUT.
In many image-to-image translation tasks, our method achieves state-of-the-art results, and its advantages have been validated through comparison experiments and ablation studies.
arXiv Detail & Related papers (2023-04-24T16:20:28Z) - Accurate Image Restoration with Attention Retractable Transformer [50.05204240159985]
We propose Attention Retractable Transformer (ART) for image restoration.
ART presents both dense and sparse attention modules in the network.
We conduct extensive experiments on image super-resolution, denoising, and JPEG compression artifact reduction tasks.
arXiv Detail & Related papers (2022-10-04T07:35:01Z) - Low Light Image Enhancement via Global and Local Context Modeling [164.85287246243956]
We introduce a context-aware deep network for low-light image enhancement.
First, it features a global context module that models spatial correlations to find complementary cues over the full spatial domain.
Second, it introduces a dense residual block that captures local context with a relatively large receptive field.
arXiv Detail & Related papers (2021-01-04T09:40:54Z) - Enhanced Residual Networks for Context-based Image Outpainting [0.0]
Deep models struggle to understand context and extrapolation through retained information.
Current models use generative adversarial networks to generate results that lack localized image-feature consistency and appear unrealistic.
We propose two methods to improve this issue: the use of a local and global discriminator, and the addition of residual blocks within the encoding section of the network.
arXiv Detail & Related papers (2020-05-14T05:14:26Z) - A U-Net Based Discriminator for Generative Adversarial Networks [86.67102929147592]
We propose an alternative U-Net based discriminator architecture for generative adversarial networks (GANs).
The proposed architecture allows the discriminator to provide detailed per-pixel feedback to the generator while maintaining the global coherence of synthesized images.
The novel discriminator improves over the state of the art in terms of the standard distribution and image quality metrics.
arXiv Detail & Related papers (2020-02-28T11:16:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.