GIID-Net: Generalizable Image Inpainting Detection via Neural
Architecture Search and Attention
- URL: http://arxiv.org/abs/2101.07419v2
- Date: Fri, 29 Jan 2021 05:44:31 GMT
- Title: GIID-Net: Generalizable Image Inpainting Detection via Neural
Architecture Search and Attention
- Authors: Haiwei Wu and Jiantao Zhou
- Abstract summary: The malicious use of advanced image inpainting tools has led to increasing threats to the reliability of image data.
To fight against inpainting forgeries, we propose a novel end-to-end Generalizable Image Inpainting Detection Network (GIID-Net).
The proposed GIID-Net consists of three sub-blocks: the enhancement block, the extraction block and the decision block.
- Score: 19.599993572921065
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning (DL) has demonstrated its powerful capabilities in the field of
image inpainting, where it can produce visually plausible results. Meanwhile,
the malicious use of advanced image inpainting tools (e.g., removing key objects
to report fake news) has led to increasing threats to the reliability of image
data. To fight against inpainting forgeries, in this work we propose a
novel end-to-end Generalizable Image Inpainting Detection Network (GIID-Net)
that detects inpainted regions with pixel-level accuracy. The proposed GIID-Net
consists of three sub-blocks: the enhancement block, the extraction block, and
the decision block. Specifically, the enhancement block aims to enhance the
inpainting traces by using hierarchically combined special layers. The
extraction block, automatically designed by a Neural Architecture Search (NAS)
algorithm, is targeted at extracting features for the actual inpainting detection
task. To further optimize the extracted latent features, we integrate
global and local attention modules in the decision block, where the global
attention reduces intra-class differences by measuring the similarity of
global features, while the local attention strengthens the consistency of local
features. Furthermore, we thoroughly study the generalizability of our
GIID-Net and find that different training data can result in vastly
different generalization capabilities. Extensive experimental results are
presented to validate the superiority of the proposed GIID-Net compared with
state-of-the-art competitors. Our results suggest that common
artifacts are shared across diverse image inpainting methods. Finally, we build
a public inpainting dataset of 10K image pairs for future research in this
area.
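The three-block pipeline described in the abstract can be made concrete with a minimal sketch. The PyTorch code below is illustrative only: the fixed high-pass enhancement filter, the plain convolutional stack standing in for the NAS-searched extraction block, and the particular global/local attention formulations are assumptions made for illustration, not the authors' exact design.

```python
# Illustrative sketch only: the exact GIID-Net layers (the NAS-searched cells, the
# "hierarchically combined special layers", and the precise attention formulations)
# are not specified in this summary, so standard stand-ins are used throughout.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GlobalAttention(nn.Module):
    """Non-local style attention: re-weights features by global pairwise similarity."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 2, 1)
        self.key = nn.Conv2d(channels, channels // 2, 1)
        self.value = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (b, hw, c/2)
        k = self.key(x).flatten(2)                      # (b, c/2, hw)
        v = self.value(x).flatten(2).transpose(1, 2)    # (b, hw, c)
        attn = torch.softmax(q @ k / (q.shape[-1] ** 0.5), dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + out                                   # residual connection


class LocalAttention(nn.Module):
    """Spatial gating computed from a local neighborhood (a simple stand-in)."""
    def __init__(self, channels, kernel_size=7):
        super().__init__()
        self.gate = nn.Conv2d(channels, channels, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        return x * torch.sigmoid(self.gate(x))


class InpaintingDetector(nn.Module):
    def __init__(self, base=32):
        super().__init__()
        # Enhancement block: a fixed high-pass filter to amplify inpainting traces
        # (an assumption; the paper uses "hierarchically combined special layers").
        hp = torch.tensor([[0., -1., 0.], [-1., 4., -1.], [0., -1., 0.]])
        self.register_buffer("hp_kernel", hp.repeat(3, 1, 1, 1))  # one filter per RGB channel
        # Extraction block: a plain conv stack standing in for the NAS-searched cells.
        self.extract = nn.Sequential(
            nn.Conv2d(6, base, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, base, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Decision block: global + local attention, then a per-pixel prediction head.
        self.global_attn = GlobalAttention(base)
        self.local_attn = LocalAttention(base)
        self.head = nn.Conv2d(base, 1, 1)

    def forward(self, img):
        residual = F.conv2d(img, self.hp_kernel, padding=1, groups=3)
        feat = self.extract(torch.cat([img, residual], dim=1))
        feat = self.local_attn(self.global_attn(feat))
        return torch.sigmoid(self.head(feat))            # per-pixel inpainting probability


mask = InpaintingDetector()(torch.rand(1, 3, 64, 64))     # -> (1, 1, 64, 64)
```

In this sketch, the global attention compares every spatial position against all others, which pulls features of the same class closer together, while the local gate re-weights each position from its immediate neighborhood, encouraging locally consistent predictions.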
Related papers
- Dense Feature Interaction Network for Image Inpainting Localization [28.028361409524457]
Inpainting can be used to conceal or alter image contents in malicious image manipulation.
Existing methods mostly rely on a basic encoder-decoder structure, which often results in a high number of false positives.
In this paper, we describe a new method for inpainting detection based on a Dense Feature Interaction Network (DeFI-Net).
arXiv Detail & Related papers (2024-08-05T02:35:13Z)
- Pixel-Inconsistency Modeling for Image Manipulation Localization [63.54342601757723]
Digital image forensics plays a crucial role in image authentication and manipulation localization.
This paper presents a generalized and robust manipulation localization model through the analysis of pixel inconsistency artifacts.
Experiments show that our method successfully extracts inherent pixel-inconsistency forgery fingerprints.
arXiv Detail & Related papers (2023-09-30T02:54:51Z)
- Progressive with Purpose: Guiding Progressive Inpainting DNNs through Context and Structure [0.0]
We propose a novel inpainting network that maintains the structural and contextual integrity of a processed image.
The core of the proposed network, inspired by Gaussian and Laplacian pyramids, is a feature extraction module named GLE.
Our benchmarking experiments demonstrate that the proposed method achieves clear improvement in performance over many state-of-the-art inpainting algorithms.
arXiv Detail & Related papers (2022-09-21T02:15:02Z)
- Joint Learning of Deep Texture and High-Frequency Features for Computer-Generated Image Detection [24.098604827919203]
We propose a joint learning strategy with deep texture and high-frequency features for CG image detection.
A semantic segmentation map is generated to guide the affine transformation operation.
The combination of the original image and the high-frequency components of the original and rendered images is fed into a multi-branch neural network equipped with attention mechanisms.
arXiv Detail & Related papers (2022-09-07T17:30:40Z)
- Learning Enriched Features for Fast Image Restoration and Enhancement [166.17296369600774]
This paper pursues the holistic goal of maintaining spatially-precise high-resolution representations through the entire network.
We learn an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
Our approach achieves state-of-the-art results for a variety of image processing tasks, including defocus deblurring, image denoising, super-resolution, and image enhancement.
arXiv Detail & Related papers (2022-04-19T17:59:45Z)
- Learning Hierarchical Graph Representation for Image Manipulation Detection [50.04902159383709]
The objective of image manipulation detection is to identify and locate the manipulated regions in the images.
Recent approaches mostly adopt sophisticated Convolutional Neural Networks (CNNs) to capture the tampering artifacts left in the images.
We propose a hierarchical Graph Convolutional Network (HGCN-Net), which consists of two parallel branches.
arXiv Detail & Related papers (2022-01-15T01:54:25Z)
- Noise Doesn't Lie: Towards Universal Detection of Deep Inpainting [42.189768203036394]
We make the first attempt towards universal detection of deep inpainting, where the detection network can generalize well.
Our approach outperforms existing detection methods by a large margin and generalizes well to unseen deep inpainting techniques.
arXiv Detail & Related papers (2021-06-03T01:29:29Z)
- Learning Enriched Features for Real Image Restoration and Enhancement [166.17296369600774]
Convolutional neural networks (CNNs) have achieved dramatic improvements over conventional approaches for image restoration tasks.
We present a novel architecture with the collective goals of maintaining spatially-precise high-resolution representations through the entire network.
Our approach learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
arXiv Detail & Related papers (2020-03-15T11:04:30Z)
- A U-Net Based Discriminator for Generative Adversarial Networks [86.67102929147592]
We propose an alternative U-Net based discriminator architecture for generative adversarial networks (GANs).
The proposed architecture allows the discriminator to provide detailed per-pixel feedback to the generator while maintaining the global coherence of synthesized images.
The novel discriminator improves over the state of the art in terms of standard distribution and image quality metrics.
arXiv Detail & Related papers (2020-02-28T11:16:54Z)
- Image Fine-grained Inpainting [89.17316318927621]
We present a one-stage model that utilizes dense combinations of dilated convolutions to obtain larger and more effective receptive fields.
To better train this efficient generator, we design a novel self-guided regression loss in addition to the frequently used VGG feature matching loss.
We also employ a discriminator with local and global branches to ensure local-global content consistency.
arXiv Detail & Related papers (2020-02-07T03:45:25Z)
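Several of the entries above lean on the same primitive: isolating the high-frequency content of an image (Gaussian/Laplacian pyramids in the GLE module, high-frequency branches for CG detection, noise residuals for universal inpainting detection). The following minimal Laplacian-pyramid sketch illustrates that idea; the kernel size, sigma, and number of levels are arbitrary choices for illustration and are not taken from any of the listed papers.

```python
# Minimal sketch of a Laplacian-pyramid high-frequency decomposition, the kind of
# trace-enhancement primitive that several works above build on (Gaussian/Laplacian
# pyramids, high-frequency branches, noise residuals). All hyperparameters here are
# illustrative assumptions.
import torch
import torch.nn.functional as F


def gaussian_kernel(size=5, sigma=1.0):
    """Separable 2D Gaussian kernel, shaped for depthwise conv over 3 channels."""
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    g = g / g.sum()
    k2d = torch.outer(g, g)
    return k2d.repeat(3, 1, 1, 1)          # one copy per RGB channel


def laplacian_pyramid(img, levels=3):
    """Return high-frequency bands (finest first) plus the low-frequency residual."""
    kernel = gaussian_kernel()
    bands = []
    current = img
    for _ in range(levels):
        blurred = F.conv2d(current, kernel, padding=2, groups=3)
        down = F.avg_pool2d(blurred, 2)
        up = F.interpolate(down, size=current.shape[-2:], mode="bilinear",
                           align_corners=False)
        bands.append(current - up)          # band-pass detail where traces tend to live
        current = down
    bands.append(current)                   # coarse low-frequency residual
    return bands


bands = laplacian_pyramid(torch.rand(1, 3, 64, 64))
print([b.shape for b in bands])             # finest band keeps the input resolution
```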