A Survey of Machine Learning Techniques in Adversarial Image Forensics
- URL: http://arxiv.org/abs/2010.09680v1
- Date: Mon, 19 Oct 2020 17:16:38 GMT
- Title: A Survey of Machine Learning Techniques in Adversarial Image Forensics
- Authors: Ehsan Nowroozi, Ali Dehghantanha, Reza M. Parizi, Kim-Kwang Raymond
Choo
- Abstract summary: Image forensics plays a crucial role in both criminal investigations and civil litigation.
Machine learning approaches are also utilized in image forensics.
This paper surveys techniques that can be used to enhance the robustness of machine learning-based binary manipulation detectors.
- Score: 45.219116050446786
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image forensics plays a crucial role in both criminal investigations (e.g.,
dissemination of fake images to spread racial hate or false narratives about
specific ethnic groups) and civil litigation (e.g., defamation).
Increasingly, machine learning approaches are also utilized in image forensics.
However, machine learning-based approaches also have limitations and
vulnerabilities, such as susceptibility to adversarial (image) examples, with
real-world consequences (e.g., inadmissible evidence or a wrongful
conviction). Therefore, with a focus on image forensics, this paper
surveys techniques that can be used to enhance the robustness of machine
learning-based binary manipulation detectors in various adversarial scenarios.
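The adversarial (image) examples the survey is concerned with can be illustrated with a minimal one-step gradient-sign (FGSM-style) perturbation against a toy logistic "manipulation detector". This is only a sketch: the detector weights, the image, and the `fgsm_attack` helper are synthetic placeholders for illustration, not any system from the surveyed literature.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, w, b, y, eps=0.05):
    """One-step FGSM: nudge each pixel of x in the direction that
    increases the detector's cross-entropy loss, so the logistic
    detector score = sigmoid(w.x + b) moves away from the true
    label y (1 = manipulated). Pixels are kept in [0, 1]."""
    p = sigmoid(w @ x + b)       # detector's predicted probability
    grad_x = (p - y) * w         # d(cross-entropy)/dx for a logistic model
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

rng = np.random.default_rng(0)
w = rng.normal(size=64)                          # toy detector weights (8x8 "image")
b = 0.0
x = np.clip(rng.normal(0.5, 0.1, size=64), 0, 1) # a "manipulated" image
y = 1                                            # ground truth: manipulated

p_before = sigmoid(w @ x + b)
x_adv = fgsm_attack(x, w, b, y, eps=0.05)
p_after = sigmoid(w @ x_adv + b)
# the small perturbation lowers the detector's confidence that x is manipulated
print(p_before, p_after)
```

The robustness techniques the paper surveys (e.g., adversarial training and the detection of perturbed inputs) aim to make binary manipulation detectors resilient to exactly this kind of small, targeted perturbation.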
Related papers
- Knowledge-Guided Prompt Learning for Deepfake Facial Image Detection [54.26588902144298]
We propose a knowledge-guided prompt learning method for deepfake facial image detection.
Specifically, we retrieve forgery-related prompts from large language models as expert knowledge to guide the optimization of learnable prompts.
Our proposed approach notably outperforms state-of-the-art methods.
arXiv Detail & Related papers (2025-01-01T02:18:18Z)
- Is JPEG AI going to change image forensics? [50.92778618091496]
We investigate the counter-forensic effects of the forthcoming JPEG AI standard based on neural image compression.
We show that leading forensic detectors suffer an increase in false alarms that impairs their performance when analyzing genuine content processed through JPEG AI.
arXiv Detail & Related papers (2024-12-04T12:07:20Z)
- MMNet: Multi-Collaboration and Multi-Supervision Network for Sequential Deepfake Detection [81.59191603867586]
Sequential deepfake detection aims to identify forged facial regions with the correct sequence for recovery.
The recovery of forged images requires knowledge of the manipulation model to implement inverse transformations.
We propose Multi-Collaboration and Multi-Supervision Network (MMNet) that handles various spatial scales and sequential permutations in forged face images.
arXiv Detail & Related papers (2023-07-06T02:32:08Z)
- Adversarial Learning in Real-World Fraud Detection: Challenges and Perspectives [1.5373344688357016]
Fraudulent activities and adversarial attacks threaten machine learning models.
We describe how attacks against fraud detection systems differ from other applications of adversarial machine learning.
arXiv Detail & Related papers (2023-07-03T23:04:49Z)
- Uncertainty-based Detection of Adversarial Attacks in Semantic Segmentation [16.109860499330562]
We introduce an uncertainty-based approach for the detection of adversarial attacks in semantic segmentation.
We demonstrate the ability of our approach to detect perturbed images across multiple types of adversarial attacks.
arXiv Detail & Related papers (2023-05-22T08:36:35Z)
- Fighting Malicious Media Data: A Survey on Tampering Detection and Deepfake Detection [115.83992775004043]
Recent advances in deep learning, particularly deep generative models, open the doors for producing perceptually convincing images and videos at a low cost.
This paper provides a comprehensive review of the current media tampering detection approaches, and discusses the challenges and trends in this field for future research.
arXiv Detail & Related papers (2022-12-12T02:54:08Z)
- A dual benchmarking study of facial forgery and facial forensics [28.979062525272866]
In recent years, visual forgery has reached a level of sophistication at which humans cannot identify the fraud.
A rich body of visual forensic techniques has been proposed in an attempt to stop this dangerous trend.
We present a benchmark that provides in-depth insights into visual forgery and visual forensics.
arXiv Detail & Related papers (2021-11-25T05:01:08Z)
- Deepfake Forensics via An Adversarial Game [99.84099103679816]
We advocate adversarial training for improving the generalization ability to both unseen facial forgeries and unseen image/video qualities.
Considering that AI-based face manipulation often leads to high-frequency artifacts that can be easily spotted by models yet difficult to generalize, we propose a new adversarial training method that attempts to blur out these specific artifacts.
arXiv Detail & Related papers (2021-03-25T02:20:08Z)
- Adversarial Machine Learning in Image Classification: A Survey Towards the Defender's Perspective [1.933681537640272]
Adversarial examples are images containing subtle perturbations generated by malicious optimization algorithms.
Deep Learning algorithms have been used in security-critical applications, such as biometric recognition systems and self-driving cars.
arXiv Detail & Related papers (2020-09-08T13:21:55Z)
- Adversarial Attack on Deep Learning-Based Splice Localization [14.669890331986794]
Using a novel algorithm, we demonstrate that adversarial attacks can hide image manipulations from three non-end-to-end deep learning-based splice localization tools.
We find that the generated adversarial perturbations transfer among the tools, degrading their localization performance.
arXiv Detail & Related papers (2020-04-17T20:31:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.