DAM-GAN : Image Inpainting using Dynamic Attention Map based on Fake
Texture Detection
- URL: http://arxiv.org/abs/2204.09442v1
- Date: Wed, 20 Apr 2022 13:15:52 GMT
- Title: DAM-GAN : Image Inpainting using Dynamic Attention Map based on Fake
Texture Detection
- Authors: Dongmin Cha, Daijin Kim
- Abstract summary: We introduce a GAN-based model using a dynamic attention map (DAM-GAN).
Our proposed DAM-GAN concentrates on detecting fake texture and produces dynamic attention maps to diminish pixel inconsistency in the feature maps of the generator.
Evaluation results on CelebA-HQ and Places2 datasets show the superiority of our network.
- Score: 6.872690425240007
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural advancements have recently brought remarkable image synthesis
performance to the field of image inpainting. The adaptation of generative
adversarial networks (GAN) in particular has accelerated significant progress
in high-quality image reconstruction. However, although many notable GAN-based
networks have been proposed for image inpainting, pixel artifacts or color
inconsistencies still occur in synthesized images during the generation
process; these are usually called fake textures. To reduce the pixel
inconsistency resulting from fake textures, we introduce a GAN-based model
using a dynamic attention map (DAM-GAN). Our proposed DAM-GAN concentrates on
detecting fake texture and produces dynamic attention maps to diminish pixel
inconsistency in the feature maps of the generator. Evaluation results on the
CelebA-HQ and Places2 datasets, compared with other image inpainting
approaches, show the superiority of our network.
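As a rough illustration of the core idea (not the authors' implementation — all names here are hypothetical), a detected fake-texture probability map can be turned into an attention map that down-weights feature locations flagged as inconsistent:

```python
import numpy as np

def dynamic_attention_reweight(features, fake_prob):
    """Down-weight generator feature locations flagged as fake texture.

    features:  (C, H, W) feature map from a generator layer.
    fake_prob: (H, W) per-pixel probability of being fake texture, in [0, 1].
    Returns the reweighted (C, H, W) feature map.
    """
    attention = 1.0 - fake_prob               # high attention where texture looks real
    return features * attention[None, :, :]   # broadcast the map over channels

# Toy usage: suppress a 2x2 patch flagged as 90% fake.
feats = np.ones((4, 8, 8))
p_fake = np.zeros((8, 8))
p_fake[2:4, 2:4] = 0.9
out = dynamic_attention_reweight(feats, p_fake)
```

In the actual model the attention maps are produced dynamically per layer from the detection branch; this sketch only shows the reweighting step under that assumption.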
Related papers
- Rethinking the Up-Sampling Operations in CNN-based Generative Network
for Generalizable Deepfake Detection [86.97062579515833]
We introduce the concept of Neighboring Pixel Relationships (NPR) as a means to capture and characterize the generalized structural artifacts stemming from up-sampling operations.
A comprehensive analysis is conducted on an open-world dataset comprising samples generated by 28 distinct generative models.
This analysis culminates in the establishment of a novel state-of-the-art performance, showcasing a remarkable 11.6% improvement over existing methods.
arXiv Detail & Related papers (2023-12-16T14:27:06Z) - Image Deblurring using GAN [0.0]
This project focuses on the application of Generative Adversarial Network (GAN) in image deblurring.
The project defines a GAN model and trains it on the GoPro dataset.
The network obtains sharper pixels in the deblurred image, achieving an average Peak Signal-to-Noise Ratio (PSNR) of 29.3 dB and a Structural Similarity Index Measure (SSIM) of 0.72.
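For reference, the PSNR figure quoted above is derived directly from the mean squared error between the reference and restored images; a minimal sketch:

```python
import numpy as np

def psnr(reference, estimate, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB between two images."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: PSNR is unbounded
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy usage: a constant offset of 8 gray levels gives MSE = 64.
a = np.zeros((16, 16), dtype=np.uint8)
b = a + 8
print(round(psnr(a, b), 2))  # 10*log10(255^2 / 64) ≈ 30.07
```

SSIM is considerably more involved (local means, variances, and covariances over sliding windows), so in practice a library implementation such as scikit-image's is normally used rather than hand-rolled code.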
arXiv Detail & Related papers (2023-12-15T02:43:30Z) - Parents and Children: Distinguishing Multimodal DeepFakes from Natural Images [60.34381768479834]
Recent advancements in diffusion models have enabled the generation of realistic deepfakes from textual prompts in natural language.
We pioneer a systematic study on the detection of deepfakes generated by state-of-the-art diffusion models.
arXiv Detail & Related papers (2023-04-02T10:25:09Z) - Joint Learning of Deep Texture and High-Frequency Features for
Computer-Generated Image Detection [24.098604827919203]
We propose a joint learning strategy with deep texture and high-frequency features for CG image detection.
A semantic segmentation map is generated to guide the affine transformation operation.
The combination of the original image and the high-frequency components of the original and rendered images is fed into a multi-branch neural network equipped with attention mechanisms.
arXiv Detail & Related papers (2022-09-07T17:30:40Z) - ReGO: Reference-Guided Outpainting for Scenery Image [82.21559299694555]
Generative adversarial learning has advanced image outpainting by producing semantically consistent content for the given image.
This work investigates a principled way to synthesize texture-rich results by borrowing pixels from its neighbors.
To prevent the style of the generated part from being affected by the reference images, a style ranking loss is proposed to augment the ReGO to synthesize style-consistent results.
arXiv Detail & Related papers (2021-06-20T02:34:55Z) - Ensembling with Deep Generative Views [72.70801582346344]
Generative models can synthesize "views" of artificial images that mimic real-world variations, such as changes in color or pose.
Here, we investigate whether such views can be applied to real images to benefit downstream analysis tasks such as image classification.
We use StyleGAN2 as the source of generative augmentations and investigate this setup on classification tasks involving facial attributes, cat faces, and cars.
arXiv Detail & Related papers (2021-04-29T17:58:35Z) - Neural Re-Rendering of Humans from a Single Image [80.53438609047896]
We propose a new method for neural re-rendering of a human under a novel user-defined pose and viewpoint.
Our algorithm represents body pose and shape as a parametric mesh which can be reconstructed from a single image.
arXiv Detail & Related papers (2021-01-11T18:53:47Z) - Texture Transform Attention for Realistic Image Inpainting [6.275013056564918]
We propose a Texture Transform Attention network that better reconstructs missing regions with fine details.
Texture Transform Attention is used to create a new reassembled texture map using fine textures and coarse semantics.
We evaluate our model end-to-end with the publicly available datasets CelebA-HQ and Places2.
arXiv Detail & Related papers (2020-12-08T06:28:51Z) - Deep CG2Real: Synthetic-to-Real Translation via Image Disentanglement [78.58603635621591]
Training an unpaired synthetic-to-real translation network in image space is severely under-constrained.
We propose a semi-supervised approach that operates on the disentangled shading and albedo layers of the image.
Our two-stage pipeline first learns to predict accurate shading in a supervised fashion using physically-based renderings as targets.
arXiv Detail & Related papers (2020-03-27T21:45:41Z) - Leveraging Frequency Analysis for Deep Fake Image Recognition [35.1862941141084]
Deep neural networks can generate images that are astonishingly realistic, so much so that it is often hard for humans to distinguish them from actual photos.
These achievements have been largely made possible by Generative Adversarial Networks (GANs)
In this paper, we show that in frequency space, GAN-generated images exhibit severe artifacts that can be easily identified.
arXiv Detail & Related papers (2020-03-19T11:06:54Z)
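The frequency-space artifacts mentioned above can be made visible with a plain spectral transform. The paper itself analyzes DCT spectra; the sketch below (an assumption-laden illustration, not the paper's pipeline) uses numpy's standard 2D FFT to show how a magnitude spectrum is obtained, in which up-sampling artifacts would appear as regular off-center peaks:

```python
import numpy as np

def log_magnitude_spectrum(image):
    """Centered log-magnitude 2D FFT spectrum of a grayscale image.

    Periodic artifacts (e.g. from GAN up-sampling) show up as
    regular peaks away from the low-frequency center.
    """
    f = np.fft.fftshift(np.fft.fft2(image.astype(np.float64)))
    return np.log1p(np.abs(f))

# Toy usage: a pure horizontal sine (8 cycles over 64 pixels) yields
# exactly two symmetric off-center peaks in the spectrum.
h, w = 64, 64
x = np.arange(w)
img = np.sin(2 * np.pi * 8 * x / w)[None, :].repeat(h, axis=0)
spec = log_magnitude_spectrum(img)
peaks = np.argwhere(spec > spec.max() - 1e-6)
```

A classifier along the lines of the paper would then be trained on such spectra (or on DCT coefficients) rather than on raw pixels.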
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.