Guidance and Evaluation: Semantic-Aware Image Inpainting for Mixed
Scenes
- URL: http://arxiv.org/abs/2003.06877v3
- Date: Fri, 10 Jul 2020 10:58:49 GMT
- Title: Guidance and Evaluation: Semantic-Aware Image Inpainting for Mixed
Scenes
- Authors: Liang Liao, Jing Xiao, Zheng Wang, Chia-Wen Lin, Shin'ichi Satoh
- Abstract summary: We propose a Semantic Guidance and Evaluation Network (SGE-Net) to update the structural priors and the inpainted image.
It utilizes a semantic segmentation map as guidance at each scale of inpainting, under which location-dependent inferences are re-evaluated.
Experiments on real-world images of mixed scenes demonstrated the superiority of our proposed method over state-of-the-art approaches.
- Score: 54.836331922449666
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Completing a corrupted image with correct structures and reasonable textures
for a mixed scene remains an elusive challenge. Since the missing hole in a
corrupted image of a mixed scene often contains varied semantic information,
conventional two-stage approaches utilizing structural information often lead
to the problem of unreliable structural prediction and ambiguous image texture
generation. In this paper, we propose a Semantic Guidance and Evaluation
Network (SGE-Net) to iteratively update the structural priors and the inpainted
image in an interplay framework of semantics extraction and image inpainting.
It utilizes a semantic segmentation map as guidance at each scale of inpainting,
under which location-dependent inferences are re-evaluated, and, accordingly,
poorly-inferred regions are refined in subsequent scales. Extensive experiments
on real-world images of mixed scenes demonstrated the superiority of our
proposed method over state-of-the-art approaches, in terms of clear boundaries
and photo-realistic textures.
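The iterative interplay of semantics extraction, segmentation-guided inpainting, and re-evaluation described in the abstract can be sketched in plain Python. This is a toy illustration only, not the authors' implementation: `segment`, `inpaint`, and `evaluate` are hypothetical stand-ins for SGE-Net's learned CNN modules, the 1-D "image" replaces real multi-scale feature maps, and the confidence threshold is an assumed parameter.

```python
def segment(img):
    # Toy semantics extraction: label 0 for dark pixels, label 1 for bright ones.
    return [0 if v < 0.5 else 1 for v in img]

def inpaint(img, mask, seg):
    # Toy segmentation-guided filling: each masked pixel takes the mean of the
    # known pixels that share its semantic label.
    out = list(img)
    for lbl in set(seg):
        known = [img[i] for i in range(len(img)) if i not in mask and seg[i] == lbl]
        fill = sum(known) / len(known) if known else 0.5
        for i in mask:
            if seg[i] == lbl:
                out[i] = fill
    return out

def evaluate(img, mask, seg):
    # Toy evaluation: confidence that a filled value agrees with its semantic
    # class, measured as closeness to the class mean (1.0 = perfect agreement).
    conf = {}
    for i in mask:
        known = [img[j] for j in range(len(img)) if j not in mask and seg[j] == seg[i]]
        mean = sum(known) / len(known) if known else 0.5
        conf[i] = 1.0 - abs(img[i] - mean)
    return conf

def sge_sketch(img, mask, scales=3, thresh=0.9):
    # Coarse-to-fine interplay: extract semantics, inpaint under their guidance,
    # re-evaluate location-dependent inferences, and keep only poorly-inferred
    # locations masked for refinement at the next scale.
    for _ in range(scales):
        seg = segment(img)
        img = inpaint(img, mask, seg)
        conf = evaluate(img, mask, seg)
        mask = {i for i in mask if conf[i] < thresh}
    return img, mask
```

For example, filling a single hole at index 2 of `[0.1, 0.1, 0.0, 0.9, 0.9]` assigns it the mean of its semantic neighbors and clears the mask once the filled value is consistent with its class.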
Related papers
- ENTED: Enhanced Neural Texture Extraction and Distribution for
Reference-based Blind Face Restoration [51.205673783866146]
We present ENTED, a new framework for blind face restoration that aims to restore high-quality and realistic portrait images.
We utilize a texture extraction and distribution framework to transfer high-quality texture features between the degraded input and reference image.
The StyleGAN-like architecture in our framework requires high-quality latent codes to generate realistic images.
arXiv Detail & Related papers (2024-01-13T04:54:59Z) - Semantic Image Translation for Repairing the Texture Defects of Building
Models [16.764719266178655]
We introduce a novel approach for synthesizing facade texture images that authentically reflect the architectural style from a structured label map.
Our proposed method is also capable of synthesizing texture images with specific styles for facades that lack pre-existing textures.
arXiv Detail & Related papers (2023-03-30T14:38:53Z) - Structure-Guided Image Completion with Image-level and Object-level Semantic Discriminators [97.12135238534628]
We propose a learning paradigm that consists of semantic discriminators and object-level discriminators for improving the generation of complex semantics and objects.
Specifically, the semantic discriminators leverage pretrained visual features to improve the realism of the generated visual concepts.
Our proposed scheme significantly improves the generation quality and achieves state-of-the-art results on various tasks.
arXiv Detail & Related papers (2022-12-13T01:36:56Z) - Instance-Aware Image Completion [15.64981939298373]
We propose a novel image completion model, dubbed ImComplete, that hallucinates the missing instance so that it harmonizes with, and thus preserves, the original context.
ImComplete first adopts a transformer architecture that considers the visible instances and the location of the missing region.
Then, ImComplete completes the semantic segmentation masks within the missing region, providing pixel-level semantic and structural guidance.
arXiv Detail & Related papers (2022-10-22T04:38:00Z) - Reference-Guided Texture and Structure Inference for Image Inpainting [25.775006005766222]
We build a benchmark dataset containing 10K pairs of input and reference images for reference-guided inpainting.
We adopt an encoder-decoder structure to infer the texture and structure features of the input image.
A feature alignment module is further designed to refine these features of the input image with the guidance of a reference image.
arXiv Detail & Related papers (2022-07-29T06:26:03Z) - Marginal Contrastive Correspondence for Guided Image Generation [58.0605433671196]
Exemplar-based image translation establishes dense correspondences between a conditional input and an exemplar from two different domains.
Existing work builds the cross-domain correspondences implicitly by minimizing feature-wise distances across the two domains.
We design a Marginal Contrastive Learning Network (MCL-Net) that explores contrastive learning to learn domain-invariant features for realistic exemplar-based image translation.
arXiv Detail & Related papers (2022-04-01T13:55:44Z) - GLocal: Global Graph Reasoning and Local Structure Transfer for Person
Image Generation [2.580765958706854]
We focus on person image generation, namely, generating a person image under various conditions, e.g., a corrupted texture or a different pose.
We present a GLocal framework to improve the occlusion-aware texture estimation by globally reasoning the style inter-correlations among different semantic regions.
For local structural information preservation, we further extract the local structure of the source image and regain it in the generated image via local structure transfer.
arXiv Detail & Related papers (2021-12-01T03:54:30Z) - Context-Aware Image Inpainting with Learned Semantic Priors [100.99543516733341]
We introduce pretext tasks that are semantically meaningful for estimating the missing contents.
We propose a context-aware image inpainting model, which adaptively integrates global semantics and local features.
arXiv Detail & Related papers (2021-06-14T08:09:43Z) - Image Inpainting Guided by Coherence Priors of Semantics and Textures [62.92586889409379]
We introduce coherence priors between the semantics and textures, which make it possible to concentrate on completing separate textures in a semantic-wise manner.
We also propose two coherence losses to constrain the consistency between the semantics and the inpainted image in terms of the overall structure and detailed textures.
arXiv Detail & Related papers (2020-12-15T02:59:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.