Attacking Image Splicing Detection and Localization Algorithms Using Synthetic Traces
- URL: http://arxiv.org/abs/2211.12314v1
- Date: Tue, 22 Nov 2022 15:07:16 GMT
- Title: Attacking Image Splicing Detection and Localization Algorithms Using Synthetic Traces
- Authors: Shengbang Fang, Matthew C Stamm
- Abstract summary: Recent advances in deep learning have enabled forensics researchers to develop a new class of image splicing detection and localization algorithms.
These algorithms identify spliced content by detecting localized inconsistencies in forensic traces using Siamese neural networks.
In this paper, we propose a new GAN-based anti-forensic attack that is able to fool state-of-the-art splicing detection and localization algorithms.
- Score: 17.408491376238008
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in deep learning have enabled forensics researchers to
develop a new class of image splicing detection and localization algorithms.
These algorithms identify spliced content by detecting localized
inconsistencies in forensic traces using Siamese neural networks, either
explicitly during analysis or implicitly during training. At the same time,
deep learning has enabled new forms of anti-forensic attacks, such as
adversarial examples and generative adversarial network (GAN) based attacks.
Thus far, however, no anti-forensic attack has been demonstrated against image
splicing detection and localization algorithms. In this paper, we propose a new
GAN-based anti-forensic attack that is able to fool state-of-the-art splicing
detection and localization algorithms such as EXIF-Net, Noiseprint, and
Forensic Similarity Graphs. This attack operates by adversarially training an
anti-forensic generator against a set of Siamese neural networks so that it is
able to create synthetic forensic traces. Under analysis, these synthetic
traces appear authentic and are self-consistent throughout an image. Through a
series of experiments, we demonstrate that our attack is capable of fooling
forensic splicing detection and localization algorithms without introducing
visually detectable artifacts into an attacked image. Additionally, we
demonstrate that our attack outperforms existing alternative attack approaches.
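As a concrete illustration of the attack described in the abstract, the following is a minimal PyTorch sketch of the general recipe: a small anti-forensic generator is trained against a frozen Siamese-style patch embedder so that every patch of the attacked image produces mutually consistent forensic embeddings while the image stays visually close to the input. This is not the paper's published code or architecture; the toy networks (PatchEmbedder, AntiForensicGenerator), the 0.1 consistency weight, the 32-pixel patch size, and the random placeholder data are all illustrative assumptions. An actual attack of the kind described would instead target the real forensic Siamese networks (e.g., those behind EXIF-Net, Noiseprint, or Forensic Similarity Graphs) and train on real spliced images.

    # Illustrative sketch only (PyTorch). The networks, losses, and weights below are
    # placeholders for the general "generator vs. frozen Siamese scorer" pattern.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PatchEmbedder(nn.Module):
        """Stand-in for a pretrained Siamese forensic-trace embedder (kept frozen)."""
        def __init__(self, dim=64):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.fc = nn.Linear(64, dim)

        def forward(self, patches):                 # patches: (N, 3, P, P)
            return self.fc(self.features(patches).flatten(1))

    class AntiForensicGenerator(nn.Module):
        """Residual generator that re-synthesizes forensic traces on the attacked image."""
        def __init__(self):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 3, 3, padding=1),
            )

        def forward(self, x):
            # Small bounded residual keeps the attacked image visually close to the input.
            return torch.clamp(x + 0.02 * torch.tanh(self.body(x)), 0.0, 1.0)

    def extract_patches(img, patch=32):
        """Split a (B, 3, H, W) image into non-overlapping patches of shape (N, 3, patch, patch)."""
        B, C, H, W = img.shape
        p = img.unfold(2, patch, patch).unfold(3, patch, patch)   # (B, C, nH, nW, patch, patch)
        return p.permute(0, 2, 3, 1, 4, 5).reshape(-1, C, patch, patch)

    def consistency_loss(embedder, attacked, patch=32):
        """Push all patch embeddings of the attacked image toward mutual consistency."""
        emb = F.normalize(embedder(extract_patches(attacked, patch)), dim=1)
        sim = emb @ emb.t()                                       # pairwise cosine similarities
        return (1.0 - sim).mean()                                 # drive every pair toward similarity 1

    # --- toy training loop (random data stands in for a real spliced-image dataset) ---
    embedder = PatchEmbedder().eval()                             # frozen Siamese-style scorer
    for p in embedder.parameters():
        p.requires_grad_(False)

    gen = AntiForensicGenerator()
    opt = torch.optim.Adam(gen.parameters(), lr=1e-4)

    for step in range(5):                                         # a real run iterates over a dataset
        spliced = torch.rand(2, 3, 128, 128)                      # placeholder spliced images in [0, 1]
        attacked = gen(spliced)
        loss = F.l1_loss(attacked, spliced) + 0.1 * consistency_loss(embedder, attacked)
        opt.zero_grad()
        loss.backward()
        opt.step()
        print(f"step {step}: loss={loss.item():.4f}")

The same adversarial-generator pattern also underlies the related "Making GAN-Generated Images Difficult To Spot" attack listed below, where a generator is trained to synthesize traces that detectors associate with real images.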
Related papers
- Time-Aware Face Anti-Spoofing with Rotation Invariant Local Binary Patterns and Deep Learning [50.79277723970418]
Imitation attacks can lead to erroneous identification and subsequent authentication of attackers.
Similar to face recognition, imitation attacks can also be detected with Machine Learning.
We propose a novel approach that promises high classification accuracy by combining previously unused features with time-aware deep learning strategies.
arXiv Detail & Related papers (2024-08-27T07:26:10Z)
- Exploring the Adversarial Robustness of CLIP for AI-generated Image Detection [9.516391314161154]
We study the adversarial robustness of AI-generated image detectors, focusing on Contrastive Language-Image Pretraining (CLIP)-based methods.
CLIP-based detectors are found to be vulnerable to white-box attacks just like CNN-based detectors.
This analysis provides new insights into the properties of forensic detectors that can help to develop more effective strategies.
arXiv Detail & Related papers (2024-07-28T18:20:08Z)
- UniForensics: Face Forgery Detection via General Facial Representation [60.5421627990707]
High-level semantic features are less susceptible to perturbations and are not limited to forgery-specific artifacts, and therefore generalize better.
We introduce UniForensics, a novel deepfake detection framework that leverages a transformer-based video network, with a meta-functional face classification for enriched facial representation.
arXiv Detail & Related papers (2024-07-26T20:51:54Z)
- Black-Box Attack against GAN-Generated Image Detector with Contrastive Perturbation [0.4297070083645049]
We propose a new black-box attack method against GAN-generated image detectors.
A novel contrastive learning strategy is adopted to train the encoder-decoder network based anti-forensic model.
The proposed attack effectively reduces the accuracy of three state-of-the-art detectors on six popular GANs.
arXiv Detail & Related papers (2022-11-07T12:56:14Z)
- AntidoteRT: Run-time Detection and Correction of Poison Attacks on Neural Networks [18.461079157949698]
We address backdoor poisoning attacks against image classification networks.
We propose lightweight automated detection and correction techniques against poisoning attacks.
Our technique outperforms existing defenses such as NeuralCleanse and STRIP on popular benchmarks.
arXiv Detail & Related papers (2022-01-31T23:42:32Z)
- Identification of Attack-Specific Signatures in Adversarial Examples [62.17639067715379]
We show that different attack algorithms produce adversarial examples which are distinct not only in their effectiveness but also in how they qualitatively affect their victims.
Our findings suggest that prospective adversarial attacks should be compared not only via their success rates at fooling models but also via deeper downstream effects they have on victims.
arXiv Detail & Related papers (2021-10-13T15:40:48Z)
- Real-World Adversarial Examples involving Makeup Application [58.731070632586594]
We propose a physical adversarial attack that uses full-face makeup.
Our attack can effectively overcome manual errors in makeup application, such as color and position-related errors.
arXiv Detail & Related papers (2021-09-04T05:29:28Z)
- Making GAN-Generated Images Difficult To Spot: A New Attack Against Synthetic Image Detectors [24.809185168969066]
We propose a new anti-forensic attack capable of fooling GAN-generated image detectors.
Our attack uses an adversarially trained generator to synthesize traces that these detectors associate with real images.
We show that our attack can fool eight state-of-the-art detection CNNs with synthetic images created using seven different GANs.
arXiv Detail & Related papers (2021-04-25T05:56:57Z)
- Adversarial Attack on Deep Learning-Based Splice Localization [14.669890331986794]
Using a novel algorithm, we demonstrate on three non-end-to-end deep learning-based splice localization tools that image manipulations can be hidden via adversarial attacks.
We find that the resulting adversarial perturbations transfer among these tools, degrading their localization performance.
arXiv Detail & Related papers (2020-04-17T20:31:38Z)
- Towards Achieving Adversarial Robustness by Enforcing Feature Consistency Across Bit Planes [51.31334977346847]
We train networks to form coarse impressions based on the information in higher bit planes, and use the lower bit planes only to refine their prediction.
We demonstrate that, by imposing consistency on the representations learned across differently quantized images, the adversarial robustness of networks improves significantly (a generic bit-plane decomposition sketch follows this list).
arXiv Detail & Related papers (2020-04-01T09:31:10Z)
- Temporal Sparse Adversarial Attack on Sequence-based Gait Recognition [56.844587127848854]
We demonstrate that the state-of-the-art gait recognition model is vulnerable to such attacks.
We employ a generative adversarial network based architecture to semantically generate adversarial high-quality gait silhouettes or video frames.
The experimental results show that if only one-fortieth of the frames are attacked, the accuracy of the target model drops dramatically.
arXiv Detail & Related papers (2020-02-22T10:08:42Z)
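For the bit-plane entry above ("Towards Achieving Adversarial Robustness by Enforcing Feature Consistency Across Bit Planes"), here is a generic NumPy sketch of bit-plane decomposition, the operation behind the intuition that higher bit planes carry coarse image content while lower planes carry fine detail. This is not that paper's training procedure; the random image, the 8-bit grayscale assumption, and the choice of the top four planes as "coarse" are illustrative assumptions.

    # Generic bit-plane decomposition in NumPy (illustration only).
    import numpy as np

    def bit_planes(img_u8):
        """Return the 8 bit planes of an 8-bit grayscale image; planes[0] is the least significant."""
        return [(img_u8 >> b) & 1 for b in range(8)]

    def reconstruct(planes, keep):
        """Rebuild an 8-bit image from a chosen subset of bit-plane indices."""
        out = np.zeros(planes[0].shape, dtype=np.int32)
        for b in keep:
            out = out + planes[b].astype(np.int32) * (1 << b)
        return out.astype(np.uint8)

    img = (np.random.rand(64, 64) * 255).astype(np.uint8)   # placeholder image
    planes = bit_planes(img)
    coarse = reconstruct(planes, keep=[4, 5, 6, 7])          # higher planes: coarse structure
    detail = reconstruct(planes, keep=[0, 1, 2, 3])          # lower planes: fine detail / noise
    print(coarse.mean(), detail.mean())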
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.