Structure-guided Image Outpainting
- URL: http://arxiv.org/abs/2212.12326v1
- Date: Wed, 21 Dec 2022 20:24:24 GMT
- Title: Structure-guided Image Outpainting
- Authors: Xi Wang, Weixi Cheng, and Wenliang Jia
- Abstract summary: Image outpainting is hampered by large-scale area loss and limited legitimate neighboring information.
We propose a deep learning method based on a Generative Adversarial Network (GAN), with condition edges as a structural prior.
The newly added semantic embedding loss proves effective in practice.
- Score: 2.7215474244966296
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Deep learning techniques have made considerable progress in image inpainting,
restoration, and reconstruction in the last few years. Image outpainting, also
known as image extrapolation, has received little attention and lacks practical
approaches, owing to difficulties caused by large-scale area loss and limited
legitimate neighboring information. These difficulties have made outpainted
images produced by most existing models unrealistic to human eyes and
spatially inconsistent. When upsampling through deconvolution to generate fake
content, naive generation methods may yield results lacking high-frequency
details and structural authenticity. Therefore, as our novelties to handle the
image outpainting problem, we introduce a structural prior as a condition to
improve generation quality and a new semantic embedding term to enhance
perceptual sanity. We propose a deep learning method based on a Generative
Adversarial Network (GAN) that conditions on edges as a structural prior to
assist generation. We use a multi-phase adversarial training scheme that
comprises edge inference training, content inpainting training, and joint
training. The newly added semantic embedding loss proves effective in practice.
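The abstract names two key ingredients: conditioning the generator on an edge map as a structural prior, and a semantic embedding loss. The exact architecture and loss formulation are not given here, so the following is a minimal, framework-agnostic sketch; the channel layout, function names, and squared-error loss form are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical three-phase schedule, following the abstract:
# edge inference -> content inpainting -> joint training.
TRAINING_PHASES = ["edge_inference", "content_inpainting", "joint"]

def build_generator_input(known_rgb, edge_map, mask):
    """Condition the generator on structure: concatenate the known RGB
    region, an edge map (the structural prior), and the binary mask
    along the channel axis, giving a 5-channel input of shape (H, W, 5)."""
    return np.concatenate(
        [known_rgb, edge_map[..., None], mask[..., None]], axis=-1
    )

def semantic_embedding_loss(fake_embedding, real_embedding):
    """Assumed form of a semantic embedding loss: mean squared distance
    between feature embeddings of the generated and real images."""
    return float(np.mean((fake_embedding - real_embedding) ** 2))

# Toy shapes only; a real model would obtain embeddings from a CNN encoder.
h = w = 8
x = build_generator_input(
    np.zeros((h, w, 3)), np.zeros((h, w)), np.ones((h, w))
)
print(x.shape)                                           # (8, 8, 5)
print(semantic_embedding_loss(np.ones(4), np.zeros(4)))  # 1.0
```

In a full pipeline, each phase in `TRAINING_PHASES` would optimize a different subset of networks (edge generator, content generator, or both jointly) against their adversarial discriminators, with the embedding loss added during content and joint training.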
Related papers
- Semi-LLIE: Semi-supervised Contrastive Learning with Mamba-based Low-light Image Enhancement [59.17372460692809]
This work proposes a mean-teacher-based semi-supervised low-light enhancement (Semi-LLIE) framework that integrates the unpaired data into model training.
We introduce a semantic-aware contrastive loss to faithfully transfer the illumination distribution, contributing to enhancing images with natural colors.
We also propose a novel perceptive loss based on the large-scale vision-language Recognize Anything Model (RAM) to help generate enhanced images with richer textural details.
arXiv Detail & Related papers (2024-09-25T04:05:32Z)
- Fill in the ____ (a Diffusion-based Image Inpainting Pipeline) [0.0]
Inpainting is the process of taking an image and generating lost or intentionally occluded portions.
Modern inpainting techniques have shown remarkable ability in generating sensible completions.
This work addresses a critical gap in existing models: the ability to prompt and control what exactly is generated.
arXiv Detail & Related papers (2024-03-24T05:26:55Z)
- ENTED: Enhanced Neural Texture Extraction and Distribution for Reference-based Blind Face Restoration [51.205673783866146]
We present ENTED, a new framework for blind face restoration that aims to restore high-quality and realistic portrait images.
We utilize a texture extraction and distribution framework to transfer high-quality texture features between the degraded input and reference image.
The StyleGAN-like architecture in our framework requires high-quality latent codes to generate realistic images.
arXiv Detail & Related papers (2024-01-13T04:54:59Z)
- All-in-one Multi-degradation Image Restoration Network via Hierarchical Degradation Representation [47.00239809958627]
We propose a novel All-in-one Multi-degradation Image Restoration Network (AMIRNet)
AMIRNet learns a degradation representation for unknown degraded images by progressively constructing a tree structure through clustering.
This tree-structured representation explicitly reflects the consistency and discrepancy of various distortions, providing a specific clue for image restoration.
arXiv Detail & Related papers (2023-08-06T04:51:41Z)
- Deep image prior inpainting of ancient frescoes in the Mediterranean Alpine arc [0.3958317527488534]
DIP-based inpainting reduces artefacts and better adapts to contextual/non-local information, thus providing a valuable tool for art historians.
We apply this approach to reconstruct missing image content in a dataset of highly damaged digital images of medieval paintings located in several chapels in the Mediterranean Alpine Arc.
arXiv Detail & Related papers (2023-06-25T11:19:47Z)
- GRIG: Few-Shot Generative Residual Image Inpainting [27.252855062283825]
We present a novel few-shot generative residual image inpainting method that produces high-quality inpainting results.
The core idea is to propose an iterative residual reasoning method that incorporates Convolutional Neural Networks (CNNs) for feature extraction.
We also propose a novel forgery-patch adversarial training strategy to create faithful textures and detailed appearances.
arXiv Detail & Related papers (2023-04-24T12:19:06Z)
- TexPose: Neural Texture Learning for Self-Supervised 6D Object Pose Estimation [55.94900327396771]
We introduce neural texture learning for 6D object pose estimation from synthetic data.
We learn to predict realistic texture of objects from real image collections.
We learn pose estimation from pixel-perfect synthetic data.
arXiv Detail & Related papers (2022-12-25T13:36:32Z)
- Is Deep Image Prior in Need of a Good Education? [57.3399060347311]
Deep image prior was introduced as an effective prior for image reconstruction.
Despite its impressive reconstructive properties, the approach is slow when compared to learned or traditional reconstruction techniques.
We develop a two-stage learning paradigm to address the computational challenge.
arXiv Detail & Related papers (2021-11-23T15:08:26Z)
- Image Inpainting with External-internal Learning and Monochromic Bottleneck [39.89676105875726]
We propose an external-internal inpainting scheme with a monochromic bottleneck that helps image inpainting models remove these artifacts.
In the external learning stage, we reconstruct missing structures and details in the monochromic space to reduce the learning dimension.
In the internal learning stage, we propose a novel internal color propagation method with progressive learning strategies for consistent color restoration.
arXiv Detail & Related papers (2021-04-19T06:22:10Z)
- Exploiting Deep Generative Prior for Versatile Image Restoration and Manipulation [181.08127307338654]
This work presents an effective way to exploit the image prior captured by a generative adversarial network (GAN) trained on large-scale natural images.
The deep generative prior (DGP) provides compelling results to restore missing semantics, e.g., color, patch, resolution, of various degraded images.
arXiv Detail & Related papers (2020-03-30T17:45:07Z)
- Guidance and Evaluation: Semantic-Aware Image Inpainting for Mixed Scenes [54.836331922449666]
We propose a Semantic Guidance and Evaluation Network (SGE-Net) to update the structural priors and the inpainted image.
It utilizes semantic segmentation map as guidance in each scale of inpainting, under which location-dependent inferences are re-evaluated.
Experiments on real-world images of mixed scenes demonstrate the superiority of our proposed method over state-of-the-art approaches.
arXiv Detail & Related papers (2020-03-15T17:49:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.