Progressive Feedback-Enhanced Transformer for Image Forgery Localization
- URL: http://arxiv.org/abs/2311.08910v1
- Date: Wed, 15 Nov 2023 12:31:43 GMT
- Title: Progressive Feedback-Enhanced Transformer for Image Forgery Localization
- Authors: Haochen Zhu, Gang Cao, Xianglin Huang
- Abstract summary: We propose a Progressive FeedbACk-enhanced Transformer (ProFact) network to achieve coarse-to-fine image forgery localization.
We present an effective strategy to automatically generate large-scale forged image samples close to real-world forensic scenarios.
Our proposed localizer greatly outperforms the state-of-the-art on the ability and robustness of image forgery localization.
- Score: 3.765051882812805
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Blind detection of the forged regions in digital images is an effective
authentication means to counter the malicious use of local image editing
techniques. Existing encoder-decoder forensic networks overlook the fact that
detecting complex and subtle tampered regions typically requires more feedback
information. In this paper, we propose a Progressive FeedbACk-enhanced
Transformer (ProFact) network to achieve coarse-to-fine image forgery
localization. Specifically, the coarse localization map generated by an initial
branch network is adaptively fed back to the early transformer encoder layers
for enhancing the representation of positive features while suppressing
interference factors. The cascaded transformer network, combined with a
contextual spatial pyramid module, is designed to refine discriminative
forensic features for improving the forgery localization accuracy and
reliability. Furthermore, we present an effective strategy to automatically
generate large-scale forged image samples that approximate real-world forensic
scenarios, especially in terms of realistic and coherent processing. Leveraging
such samples, a progressive and cost-effective two-stage training protocol is
applied to the ProFact network. Extensive experimental results on nine public
forensic datasets show that our proposed localizer greatly outperforms the
state of the art in the generalization ability and robustness of image
forgery localization. Code will be publicly available at
https://github.com/multimediaFor/ProFact.
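The coarse-to-fine feedback idea described in the abstract can be illustrated with a minimal sketch. Everything below is illustrative, not the paper's exact design: the function name `feedback_gate`, the residual gating form `feat * (1 + sigmoid(map))`, and the nearest-neighbour resizing are all assumed details standing in for the adaptive feedback module.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def feedback_gate(feat, coarse_map):
    """Sketch of the feedback idea: re-weight early encoder features with a
    coarse localization map so that responses in likely-forged regions are
    enhanced while background interference is left un-amplified.

    feat:       (C, H, W) early encoder features
    coarse_map: (h, w) coarse localization logits from the initial branch
    """
    C, H, W = feat.shape
    h, w = coarse_map.shape
    # Nearest-neighbour resize of the coarse map to the feature resolution.
    rows = np.arange(H) * h // H
    cols = np.arange(W) * w // W
    m = coarse_map[np.ix_(rows, cols)]   # (H, W)
    gate = sigmoid(m)                    # values in (0, 1)
    # Residual modulation: forged regions get up to a 2x boost.
    return feat * (1.0 + gate)

feat = np.random.randn(64, 32, 32)      # early encoder features
coarse = np.random.randn(128, 128)      # coarse localization logits
out = feedback_gate(feat, coarse)
print(out.shape)  # (64, 32, 32)
```

Because the gate is strictly positive, the modulation never flips feature signs; it only rescales magnitudes, which matches the stated goal of enhancing positive features rather than replacing them.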
Related papers
- In-Domain GAN Inversion for Faithful Reconstruction and Editability [132.68255553099834]
We propose in-domain GAN inversion, which consists of a domain-guided encoder and a domain-regularized optimization to regularize the inverted code in the native latent space of the pre-trained GAN model.
We make comprehensive analyses on the effects of the encoder structure, the starting inversion point, as well as the inversion parameter space, and observe the trade-off between the reconstruction quality and the editing property.
arXiv Detail & Related papers (2023-09-25T08:42:06Z) - Effective Image Tampering Localization via Enhanced Transformer and
Co-attention Fusion [5.691973573807887]
We propose an effective image tampering localization network (EITLNet) based on a two-branch enhanced transformer encoder.
The features extracted from RGB and noise streams are fused effectively by the coordinate attention-based fusion module.
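The two-stream fusion described above can be sketched in miniature. This is a toy simplification of the coordinate-attention idea (direction-aware pooled statistics turned into per-position weights), not EITLNet's actual module; the function name and the sigmoid-product weighting are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def coord_attention_fuse(rgb_feat, noise_feat):
    """Toy coordinate-attention-style fusion of two feature streams.
    Both inputs: (C, H, W). The streams are concatenated, then each
    position is re-weighted by factors derived from row-wise and
    column-wise pooled statistics."""
    x = np.concatenate([rgb_feat, noise_feat], axis=0)  # (2C, H, W)
    # Direction-aware pooling: average over W gives a per-row profile,
    # average over H gives a per-column profile.
    row = x.mean(axis=2, keepdims=True)   # (2C, H, 1)
    col = x.mean(axis=1, keepdims=True)   # (2C, 1, W)
    # Broadcast the two profiles into per-position attention weights.
    attn = sigmoid(row) * sigmoid(col)    # (2C, H, W), values in (0, 1)
    return x * attn

rgb = np.random.randn(32, 16, 16)    # RGB-stream features
noise = np.random.randn(32, 16, 16)  # noise-stream features
fused = coord_attention_fuse(rgb, noise)
print(fused.shape)  # (64, 16, 16)
```

The point of the direction-aware pooling is that each attention weight encodes both where a feature sits along the height axis and along the width axis, which is cheaper than full spatial attention.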
arXiv Detail & Related papers (2023-09-17T15:43:06Z) - ReContrast: Domain-Specific Anomaly Detection via Contrastive
Reconstruction [29.370142078092375]
Most advanced unsupervised anomaly detection (UAD) methods rely on modeling feature representations of frozen encoder networks pre-trained on large-scale datasets.
We propose a novel epistemic UAD method, namely ReContrast, which optimizes the entire network to reduce biases towards the pre-trained image domain.
We conduct experiments across two popular industrial defect detection benchmarks and three medical image UAD tasks, demonstrating superiority over current state-of-the-art methods.
arXiv Detail & Related papers (2023-06-05T05:21:15Z) - Effective Image Tampering Localization via Semantic Segmentation Network [0.4297070083645049]
Existing image forensic methods still face challenges of low accuracy and robustness.
We propose an effective image tampering localization scheme based on deep semantic segmentation network.
arXiv Detail & Related papers (2022-08-29T17:22:37Z) - Transformer-based SAR Image Despeckling [53.99620005035804]
We introduce a transformer-based network for SAR image despeckling.
The proposed despeckling network comprises a transformer-based encoder that allows the network to learn global dependencies between different image regions.
Experiments show that the proposed method achieves significant improvements over traditional and convolutional neural network-based despeckling methods.
arXiv Detail & Related papers (2022-01-23T20:09:01Z) - TransForensics: Image Forgery Localization with Dense Self-Attention [37.2172540238706]
We introduce TransForensics, a novel image forgery localization method inspired by Transformers.
The two major components in our framework are dense self-attention encoders and dense correction modules.
By conducting experiments on main benchmarks, we show that TransForensics outperforms state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2021-08-09T08:43:26Z) - D-Unet: A Dual-encoder U-Net for Image Splicing Forgery Detection and
Localization [108.8592577019391]
Image splicing forgery detection is a global binary classification task that distinguishes the tampered and non-tampered regions by image fingerprints.
We propose a novel network called dual-encoder U-Net (D-Unet) for image splicing forgery detection, which employs an unfixed encoder and a fixed encoder.
In an experimental comparison study of D-Unet and state-of-the-art methods, D-Unet outperformed the other methods in image-level and pixel-level detection.
arXiv Detail & Related papers (2020-12-03T10:54:02Z) - Unsupervised Metric Relocalization Using Transform Consistency Loss [66.19479868638925]
Training networks to perform metric relocalization traditionally requires accurate image correspondences.
We propose a self-supervised solution, which exploits a key insight: localizing a query image within a map should yield the same absolute pose, regardless of the reference image used for registration.
We evaluate our framework on synthetic and real-world data, showing our approach outperforms other supervised methods when a limited amount of ground-truth information is available.
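The key insight in the summary above can be written down directly: two absolute-pose estimates of the same query, computed through different reference images, should agree. A minimal sketch follows (hypothetical function names; a real implementation would use a differentiable pose-regression loss rather than a raw matrix norm).

```python
import numpy as np

def abs_pose(ref_pose, rel_pose):
    """Compose a reference image's absolute pose with the estimated
    relative pose (4x4 homogeneous transforms) to obtain the query's
    absolute pose."""
    return ref_pose @ rel_pose

def consistency_loss(pose_from_a, pose_from_b):
    """Penalty on disagreement between the two absolute-pose estimates
    of the SAME query obtained via different reference images."""
    return np.linalg.norm(pose_from_a - pose_from_b)

# Toy example: identical references and relative poses give zero loss.
I = np.eye(4)
rel = np.eye(4)
rel[:3, 3] = [1.0, 0.0, 0.0]   # translate 1 unit along x
pa = abs_pose(I, rel)
pb = abs_pose(I, rel)
print(consistency_loss(pa, pb))  # 0.0
```

The appeal of this self-supervised signal is that it needs no ground-truth pose for the query: consistency across reference choices is checkable from the estimates alone.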
arXiv Detail & Related papers (2020-11-01T19:24:27Z) - In-Domain GAN Inversion for Real Image Editing [56.924323432048304]
A common practice of feeding a real image to a trained GAN generator is to invert it back to a latent code.
Existing inversion methods typically focus on reconstructing the target image at the pixel level yet fail to land the inverted code in the semantic domain of the original latent space.
We propose an in-domain GAN inversion approach, which faithfully reconstructs the input image and ensures that the inverted code is semantically meaningful for editing.
arXiv Detail & Related papers (2020-03-31T18:20:18Z) - Image Fine-grained Inpainting [89.17316318927621]
We present a one-stage model that utilizes dense combinations of dilated convolutions to obtain larger and more effective receptive fields.
To better train this efficient generator, in addition to the frequently-used VGG feature-matching loss, we design a novel self-guided regression loss.
We also employ a discriminator with local and global branches to ensure local-global contents consistency.
arXiv Detail & Related papers (2020-02-07T03:45:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.