ReGO: Reference-Guided Outpainting for Scenery Image
- URL: http://arxiv.org/abs/2106.10601v1
- Date: Sun, 20 Jun 2021 02:34:55 GMT
- Title: ReGO: Reference-Guided Outpainting for Scenery Image
- Authors: Yaxiong Wang, Yunchao Wei, Xueming Qian, Li Zhu and Yi Yang
- Abstract summary: Generative adversarial learning has advanced image outpainting by producing semantically consistent content for the given image.
This work investigates a principled way to synthesize texture-rich results by borrowing pixels from neighboring reference images.
To prevent the style of the generated part from being affected by the reference images, a style ranking loss is proposed to augment ReGO to synthesize style-consistent results.
- Score: 82.21559299694555
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We aim to tackle the challenging yet practical scenery image outpainting task
in this work. Recently, generative adversarial learning has significantly
advanced image outpainting by producing semantically consistent content for the
given image. However, existing methods often suffer from blurry textures and
artifacts in the generated part, making the overall outpainting results lack
authenticity. To overcome this weakness, this work investigates a principled
way to synthesize texture-rich results by borrowing pixels from the neighbors
of the input (i.e., reference images), named Reference-Guided Outpainting
(ReGO). In particular, ReGO designs an Adaptive Content Selection (ACS) module
that transfers pixels from the reference images to compensate for the texture
of the target one. To prevent the style of the generated part from being
affected by the reference images, a style ranking loss is further proposed to
augment ReGO to synthesize style-consistent results. Extensive experiments on
two popular benchmarks, NS6K and NS8K, well demonstrate the effectiveness of
our ReGO.
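The abstract names the style ranking loss but not its formulation. One natural reading is a margin ranking constraint: the style of the generated region should be closer to the input image's style than to any reference's. The sketch below works under that assumption, using Gram-matrix style descriptors; the descriptor choice, the margin value, and all function names are illustrative, not ReGO's confirmed design.

```python
# Hedged sketch of a style ranking loss consistent with the abstract's
# description; Gram-matrix descriptors and the margin are assumptions.
import torch
import torch.nn.functional as F

def gram_matrix(feat: torch.Tensor) -> torch.Tensor:
    """Channel-wise Gram matrix of a (B, C, H, W) feature map."""
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_ranking_loss(gen_feat, target_feat, ref_feat, margin: float = 0.1):
    """Hinge ranking: the generated region's style should match the
    target image's style more closely than the reference's."""
    g_gen, g_tgt, g_ref = map(gram_matrix, (gen_feat, target_feat, ref_feat))
    d_pos = F.mse_loss(g_gen, g_tgt)  # distance to the target's own style
    d_neg = F.mse_loss(g_gen, g_ref)  # distance to the reference's style
    return F.relu(d_pos - d_neg + margin)
```

In practice the three feature maps would come from a frozen encoder (e.g., a VGG backbone) applied to the generated region, the input image, and a reference image.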
Related papers
- Prune and Repaint: Content-Aware Image Retargeting for any Ratio [8.665919238538143]
We propose a content-aware method called PruneRepaint to balance the preservation of key semantics and image quality.
By focusing on the content and structure of the foreground, our PruneRepaint approach adaptively avoids key content loss and deformation.
arXiv Detail & Related papers (2024-10-30T10:02:42Z)
- Panoramic Image Inpainting With Gated Convolution And Contextual Reconstruction Loss [19.659176149635417]
We propose a panoramic image inpainting framework that consists of a Face Generator, a Cube Generator, a side branch, and two discriminators.
The proposed method is compared with state-of-the-art (SOTA) methods on SUN360 Street View dataset in terms of PSNR and SSIM.
arXiv Detail & Related papers (2024-02-05T11:58:08Z)
- ENTED: Enhanced Neural Texture Extraction and Distribution for Reference-based Blind Face Restoration [51.205673783866146]
We present ENTED, a new framework for blind face restoration that aims to restore high-quality and realistic portrait images.
We utilize a texture extraction and distribution framework to transfer high-quality texture features between the degraded input and reference image.
The StyleGAN-like architecture in our framework requires high-quality latent codes to generate realistic images.
arXiv Detail & Related papers (2024-01-13T04:54:59Z)
- RepMix: Representation Mixing for Robust Attribution of Synthesized Images [15.698564265127432]
We present a solution capable of matching images regardless of their semantic content.
We then propose RepMix, our GAN fingerprinting technique based on representation mixing and a novel loss (a generic mixing sketch follows this list).
We show our approach improves significantly over existing GAN fingerprinting works in both semantic generalization and robustness.
arXiv Detail & Related papers (2022-07-05T14:14:06Z)
- DAM-GAN: Image Inpainting using Dynamic Attention Map based on Fake Texture Detection [6.872690425240007]
We introduce a GAN-based inpainting model using a dynamic attention map (DAM-GAN).
Our proposed DAM-GAN concentrates on detecting fake texture and produces dynamic attention maps to diminish pixel inconsistency in the generator's feature maps.
Evaluation results on CelebA-HQ and Places2 datasets show the superiority of our network.
arXiv Detail & Related papers (2022-04-20T13:15:52Z)
- Controllable Person Image Synthesis with Spatially-Adaptive Warped Normalization [72.65828901909708]
Controllable person image generation aims to produce realistic human images with desirable attributes.
We introduce a novel Spatially-Adaptive Warped Normalization (SAWN), which integrates a learned flow-field to warp modulation parameters.
We propose a novel self-training part replacement strategy to refine the pretrained model for the texture-transfer task.
arXiv Detail & Related papers (2021-05-31T07:07:44Z)
- Ensembling with Deep Generative Views [72.70801582346344]
Generative models can synthesize "views" of artificial images that mimic real-world variations, such as changes in color or pose.
Here, we investigate whether such views can be applied to real images to benefit downstream analysis tasks such as image classification.
We use StyleGAN2 as the source of generative augmentations and investigate this setup on classification tasks involving facial attributes, cat faces, and cars.
arXiv Detail & Related papers (2021-04-29T17:58:35Z)
- High-Resolution Image Inpainting with Iterative Confidence Feedback and Guided Upsampling [122.06593036862611]
Existing image inpainting methods often produce artifacts when dealing with large holes in real applications.
We propose an iterative inpainting method with a feedback mechanism.
Experiments show that our method significantly outperforms existing methods in both quantitative and qualitative evaluations.
arXiv Detail & Related papers (2020-05-24T13:23:45Z)
- Exploiting Deep Generative Prior for Versatile Image Restoration and Manipulation [181.08127307338654]
This work presents an effective way to exploit the image prior captured by a generative adversarial network (GAN) trained on large-scale natural images.
The deep generative prior (DGP) provides compelling results in restoring missing semantics (e.g., color, patch, resolution) of various degraded images.
arXiv Detail & Related papers (2020-03-30T17:45:07Z)
- Pixel-wise Conditioned Generative Adversarial Networks for Image Synthesis and Completion [3.8807073304999355]
Generative Adversarial Networks (GANs) have proven successful for unsupervised image generation.
We investigate the effectiveness of conditioning GANs when very few pixel values are provided.
We propose a modelling framework that adds an explicit cost term to the GAN objective function to enforce pixel-wise conditioning (a minimal sketch of such a term follows this list).
arXiv Detail & Related papers (2020-02-04T13:49:15Z)
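The RepMix entry above names representation mixing; a generic, mixup-style reading is to interpolate two images' features and their attribution labels with the same coefficient. This is an illustration of that general idea only; RepMix's actual architecture and its novel loss are not specified in the summary.

```python
# Generic mixup-style representation mixing; alpha and the Beta sampling
# scheme are conventional mixup choices, assumed here for illustration.
import torch

def mix_representations(feat_a, feat_b, label_a, label_b, alpha: float = 1.0):
    """Convexly mix two feature tensors and their attribution labels."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    mixed_feat = lam * feat_a + (1.0 - lam) * feat_b
    mixed_label = lam * label_a + (1.0 - lam) * label_b
    return mixed_feat, mixed_label
```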
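The pixel-wise conditioning entry describes an explicit cost term added to the GAN objective. A minimal sketch of such a term is a masked reconstruction penalty on the few observed pixels; the L1 penalty and the weight lam are assumptions, not the paper's confirmed formulation.

```python
# Hedged sketch: generator loss with an explicit pixel-wise conditioning
# term. mask is 1 at observed pixels, 0 elsewhere; lam is an assumed weight.
import torch

def generator_loss(adv_loss: torch.Tensor, fake: torch.Tensor,
                   observed: torch.Tensor, mask: torch.Tensor,
                   lam: float = 10.0) -> torch.Tensor:
    """Standard adversarial term plus a penalty wherever the synthesized
    image disagrees with the provided (masked) pixel values."""
    cond_term = torch.abs(mask * (fake - observed)).mean()
    return adv_loss + lam * cond_term
```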