Self-Adversarial Training incorporating Forgery Attention for Image
Forgery Localization
- URL: http://arxiv.org/abs/2107.02434v1
- Date: Tue, 6 Jul 2021 07:20:08 GMT
- Title: Self-Adversarial Training incorporating Forgery Attention for Image
Forgery Localization
- Authors: Long Zhuo and Shunquan Tan and Bin Li and Jiwu Huang
- Abstract summary: We propose a self-adversarial training strategy that expands training data dynamically to achieve more robust performance.
We exploit a coarse-to-fine network to enhance the noise inconsistency between original and tampered regions.
Our proposed algorithm consistently outperforms state-of-the-art methods by a clear margin on different benchmark datasets.
- Score: 40.622844703837046
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image editing techniques enable people to modify the content of an image
without leaving visual traces and thus may cause serious security risks. Hence
the detection and localization of these forgeries become quite necessary and
challenging. Furthermore, unlike other tasks with extensive data, there is
usually a lack of annotated forged images for training due to annotation
difficulties. In this paper, we propose a self-adversarial training strategy
and a reliable coarse-to-fine network that utilizes a self-attention mechanism
to localize forged regions in forged images. The self-attention module is
based on a Channel-Wise High Pass Filter block (CW-HPF). CW-HPF leverages
inter-channel relationships of features and extracts noise features by high
pass filters. Based on the CW-HPF, a self-attention mechanism, called forgery
attention, is proposed to capture rich contextual dependencies of intrinsic
inconsistency extracted from tampered regions. Specifically, we append two
types of attention modules on top of CW-HPF, respectively, to model internal
interdependencies in the spatial dimension and external dependencies among
channels. We exploit a coarse-to-fine network to enhance the noise
inconsistency between original and tampered regions. More importantly, to
address the issue of insufficient training data, we design a self-adversarial
training strategy that expands training data dynamically to achieve more robust
performance. Specifically, in each training iteration, we perform adversarial
attacks against our network to generate adversarial examples and train our
model on them. Extensive experimental results demonstrate that our proposed
algorithm consistently outperforms state-of-the-art methods by a clear margin on
different benchmark datasets.
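The abstract describes two concrete components: a Channel-Wise High Pass Filter (CW-HPF) block feeding spatial and channel attention (forgery attention), and a self-adversarial training loop that attacks the current network in each iteration and then trains on the resulting adversarial examples. The following is a minimal, hypothetical PyTorch sketch of both ideas; the module and function names, the specific high-pass filter, and the FGSM-style attack are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of CW-HPF-based forgery attention and a
# self-adversarial training step. All names and the choice of attack
# (FGSM) are assumptions for illustration only.
import torch
import torch.nn as nn


class ChannelWiseHighPass(nn.Module):
    """Illustrative per-channel high-pass residual: x - local_average(x)."""

    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.blur = nn.AvgPool2d(kernel_size, stride=1, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Subtracting a local average keeps high-frequency (noise-like) content.
        return x - self.blur(x)


class ForgeryAttention(nn.Module):
    """Spatial + channel attention computed over high-pass noise features."""

    def __init__(self, channels: int):
        super().__init__()
        self.hpf = ChannelWiseHighPass()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        noise = self.hpf(x)
        b, c, h, w = noise.shape
        # Spatial attention: dependencies between all positions (HW x HW).
        q = self.query(noise).flatten(2).transpose(1, 2)   # B x HW x C'
        k = self.key(noise).flatten(2)                      # B x C' x HW
        attn = torch.softmax(q @ k, dim=-1)                 # B x HW x HW
        v = self.value(noise).flatten(2)                     # B x C x HW
        spatial = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        # Channel attention: dependencies among channels (C x C gram matrix).
        flat = noise.flatten(2)                              # B x C x HW
        chan = torch.softmax(flat @ flat.transpose(1, 2), dim=-1)
        channel = (chan @ flat).view(b, c, h, w)
        return x + self.gamma * (spatial + channel)


def self_adversarial_step(model, images, masks, optimizer, criterion, eps=2 / 255):
    """One iteration: attack the current model, then train on the adversarial
    examples. FGSM is used purely as a stand-in attack."""
    # 1) Generate adversarial examples against the current network.
    model.eval()
    images_adv = images.clone().detach().requires_grad_(True)
    criterion(model(images_adv), masks).backward()
    images_adv = (images_adv + eps * images_adv.grad.sign()).clamp(0, 1).detach()

    # 2) Train the localization network on the adversarial examples.
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images_adv), masks)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The attack-then-train step run every iteration is what the abstract refers to as dynamically expanding the training data; the adversarial examples are regenerated from the current model rather than fixed in advance.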
Related papers
- Mutual-Guided Dynamic Network for Image Fusion [51.615598671899335]
We propose a novel mutual-guided dynamic network (MGDN) for image fusion, which allows for effective information utilization across different locations and inputs.
Experimental results on five benchmark datasets demonstrate that our proposed method outperforms existing methods on four image fusion tasks.
arXiv Detail & Related papers (2023-08-24T03:50:37Z) - PAIF: Perception-Aware Infrared-Visible Image Fusion for Attack-Tolerant
Semantic Segmentation [50.556961575275345]
We propose a perception-aware fusion framework to promote segmentation robustness in adversarial scenes.
We show that our scheme substantially enhances robustness, with gains of 15.3% mIoU over advanced competitors.
arXiv Detail & Related papers (2023-08-08T01:55:44Z) - Learning to Generate Training Datasets for Robust Semantic Segmentation [37.9308918593436]
We propose a novel approach to improve the robustness of semantic segmentation techniques.
We design Robusta, a novel conditional generative adversarial network to generate realistic and plausible perturbed images.
Our results suggest that this approach could be valuable in safety-critical applications.
arXiv Detail & Related papers (2023-08-01T10:02:26Z) - Unsupervised Domain-Specific Deblurring using Scale-Specific Attention [0.25797036386508543]
We propose unsupervised domain-specific deblurring using a scale-adaptive attention module (SAAM).
Our network does not require supervised pairs for training, and the deblurring mechanism is primarily guided by adversarial loss.
Ablation studies show that our coarse-to-fine mechanism outperforms end-to-end unsupervised models and that SAAM attends better than attention models used in the literature.
arXiv Detail & Related papers (2021-12-12T07:47:45Z) - Learnable Multi-level Frequency Decomposition and Hierarchical Attention
Mechanism for Generalized Face Presentation Attack Detection [7.324459578044212]
Face presentation attack detection (PAD) is attracting a lot of attention and playing a key role in securing face recognition systems.
We propose a dual-stream convolutional neural network (CNN) framework to deal with unseen scenarios.
We validate the design of our proposed PAD solution in a step-wise ablation study.
arXiv Detail & Related papers (2021-09-16T13:06:43Z) - Self-paced and self-consistent co-training for semi-supervised image
segmentation [23.100800154116627]
Deep co-training has been proposed as an effective approach for image segmentation when annotated data is scarce.
In this paper, we improve existing approaches for semi-supervised segmentation with a self-paced and self-consistent co-training method.
arXiv Detail & Related papers (2020-10-31T17:41:03Z) - Encoding Robustness to Image Style via Adversarial Feature Perturbations [72.81911076841408]
We adapt adversarial training by directly perturbing feature statistics, rather than image pixels, to produce robust models.
Our proposed method, Adversarial Batch Normalization (AdvBN), is a single network layer that generates worst-case feature perturbations during training.
arXiv Detail & Related papers (2020-09-18T17:52:34Z) - Stylized Adversarial Defense [105.88250594033053]
Adversarial training creates perturbation patterns and includes them in the training set to robustify the model.
We propose to exploit additional information from the feature space to craft stronger adversaries.
Our adversarial training approach demonstrates strong robustness compared to state-of-the-art defenses.
arXiv Detail & Related papers (2020-07-29T08:38:10Z) - Attentive CutMix: An Enhanced Data Augmentation Approach for Deep
Learning Based Image Classification [58.20132466198622]
We propose Attentive CutMix, a naturally enhanced augmentation strategy based on CutMix.
In each training iteration, we choose the most descriptive regions based on the intermediate attention maps from a feature extractor.
Our proposed method is simple yet effective, easy to implement and can boost the baseline significantly.
arXiv Detail & Related papers (2020-03-29T15:01:05Z) - Hybrid Multiple Attention Network for Semantic Segmentation in Aerial
Images [24.35779077001839]
We propose a novel attention-based framework named Hybrid Multiple Attention Network (HMANet) to adaptively capture global correlations.
We introduce a simple yet effective region shuffle attention (RSA) module to reduce feature redundancy and improve the efficiency of the self-attention mechanism.
arXiv Detail & Related papers (2020-01-09T07:47:51Z)