A Survey of Machine Learning Techniques in Adversarial Image Forensics
- URL: http://arxiv.org/abs/2010.09680v1
- Date: Mon, 19 Oct 2020 17:16:38 GMT
- Title: A Survey of Machine Learning Techniques in Adversarial Image Forensics
- Authors: Ehsan Nowroozi, Ali Dehghantanha, Reza M. Parizi, Kim-Kwang Raymond Choo
- Abstract summary: Image forensics plays a crucial role in both criminal investigations and civil litigation.
Machine learning approaches are also utilized in image forensics.
This paper surveys techniques that can be used to enhance the robustness of machine learning-based binary manipulation detectors.
- Score: 45.219116050446786
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image forensics plays a crucial role in both criminal investigations (e.g., dissemination of fake images to spread racial hate or false narratives about specific ethnic groups) and civil litigation (e.g., defamation).
Increasingly, machine learning approaches are also utilized in image forensics.
However, machine learning-based approaches also have a number of limitations and vulnerabilities, such as the difficulty of detecting adversarial (image) examples, with real-world consequences (e.g., inadmissible evidence or wrongful convictions). Therefore, with a focus on image forensics, this paper surveys techniques that can be used to enhance the robustness of machine learning-based binary manipulation detectors in various adversarial scenarios.
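To make the threat model concrete, the sketch below shows a single-step FGSM attack against a generic binary manipulation detector. FGSM is a standard attack used here purely for illustration; `detector`, `fgsm_attack`, and the epsilon value are assumptions, not anything prescribed by the survey.

```python
# Minimal sketch, assuming PyTorch; FGSM stands in for the attacks the
# survey covers, not any one author's method.
import torch
import torch.nn as nn

def fgsm_attack(detector: nn.Module, image: torch.Tensor,
                label: torch.Tensor, eps: float = 2 / 255) -> torch.Tensor:
    """One-step FGSM: perturb `image` to raise a binary detector's loss."""
    image = image.clone().detach().requires_grad_(True)
    logit = detector(image)                       # shape: (batch, 1)
    loss = nn.functional.binary_cross_entropy_with_logits(
        logit, label.float().view_as(logit))
    loss.backward()
    # Step in the direction that increases the detector's loss.
    adv = image + eps * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()           # keep a valid pixel range

# Example usage with a toy (hypothetical) detector:
detector = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 1))
x = torch.rand(1, 3, 64, 64)   # a "manipulated" image
y = torch.tensor([1])          # ground-truth label: manipulated
x_adv = fgsm_attack(detector, x, y)
```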
Related papers
- MMNet: Multi-Collaboration and Multi-Supervision Network for Sequential Deepfake Detection [81.59191603867586]
Sequential deepfake detection aims to identify forged facial regions with the correct sequence for recovery.
The recovery of forged images requires knowledge of the manipulation model to implement inverse transformations.
We propose Multi-Collaboration and Multi-Supervision Network (MMNet) that handles various spatial scales and sequential permutations in forged face images.
arXiv Detail & Related papers (2023-07-06T02:32:08Z)
- Adversarial Learning in Real-World Fraud Detection: Challenges and Perspectives [1.5373344688357016]
Fraudulent activities and adversarial attacks threaten machine learning models.
We describe how attacks against fraud detection systems differ from other applications of adversarial machine learning.
arXiv Detail & Related papers (2023-07-03T23:04:49Z)
- Uncertainty-based Detection of Adversarial Attacks in Semantic Segmentation [16.109860499330562]
We introduce an uncertainty-based approach for the detection of adversarial attacks in semantic segmentation.
We demonstrate the ability of our approach to detect perturbed images across multiple types of adversarial attacks.
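As a rough illustration of how such an uncertainty signal can be computed (the paper's exact measure is not reproduced here), the sketch below flags images whose mean per-pixel softmax entropy exceeds a threshold; the function names and the threshold value are assumptions.

```python
# Hedged sketch, assuming PyTorch: predictive entropy as a generic
# stand-in for the paper's uncertainty measure.
import torch

def mean_entropy(seg_logits: torch.Tensor) -> torch.Tensor:
    """seg_logits: (batch, classes, H, W) raw segmentation outputs."""
    probs = seg_logits.softmax(dim=1)
    # Per-pixel predictive entropy; adversarial inputs tend to raise it.
    entropy = -(probs * (probs + 1e-12).log()).sum(dim=1)  # (batch, H, W)
    return entropy.mean(dim=(1, 2))                        # (batch,)

def flag_adversarial(seg_logits: torch.Tensor,
                     tau: float = 0.5) -> torch.Tensor:
    """Boolean per image: True if flagged as likely adversarial."""
    return mean_entropy(seg_logits) > tau
```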
arXiv Detail & Related papers (2023-05-22T08:36:35Z)
- Fighting Malicious Media Data: A Survey on Tampering Detection and Deepfake Detection [115.83992775004043]
Recent advances in deep learning, particularly deep generative models, open the door to producing perceptually convincing images and videos at low cost.
This paper provides a comprehensive review of the current media tampering detection approaches, and discusses the challenges and trends in this field for future research.
arXiv Detail & Related papers (2022-12-12T02:54:08Z)
- A Principled Design of Image Representation: Towards Forensic Tasks [75.40968680537544]
We investigate the forensic-oriented image representation as a distinct problem, from the perspectives of theory, implementation, and application.
At the theoretical level, we propose a new representation framework for forensics, called Dense Invariant Representation (DIR), which is characterized by stable description with mathematical guarantees.
We demonstrate the above arguments through dense-domain pattern detection and matching experiments, providing comparison results against state-of-the-art descriptors.
arXiv Detail & Related papers (2022-03-02T07:46:52Z)
- A dual benchmarking study of facial forgery and facial forensics [28.979062525272866]
In recent years, visual forgery has reached a level of sophistication at which humans cannot identify fraud.
A rich body of visual forensic techniques has been proposed in an attempt to stop this dangerous trend.
We present a benchmark that provides in-depth insights into visual forgery and visual forensics.
arXiv Detail & Related papers (2021-11-25T05:01:08Z)
- Deepfake Forensics via An Adversarial Game [99.84099103679816]
We advocate adversarial training for improving the generalization ability to both unseen facial forgeries and unseen image/video qualities.
Considering that AI-based face manipulation often leaves high-frequency artifacts that models can easily spot but that generalize poorly, we propose a new adversarial training method that attempts to blur out these specific artifacts.
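The following is a minimal sketch of the shape of such a training loop, assuming PyTorch and torchvision; a fixed Gaussian blur stands in for the paper's learned adversary that suppresses high-frequency artifacts, so this illustrates the idea rather than the authors' method.

```python
# Hedged sketch: train the detector on both clean and low-pass-filtered
# views so it cannot rely solely on fragile high-frequency forgery cues.
import torch
import torch.nn as nn
from torchvision.transforms import GaussianBlur

blur = GaussianBlur(kernel_size=5, sigma=(0.5, 2.0))  # crude "attack"

def adversarial_training_step(detector, optimizer, images, labels):
    attacked = blur(images)                  # suppress high frequencies
    batch = torch.cat([images, attacked])
    targets = torch.cat([labels, labels]).float().unsqueeze(1)
    optimizer.zero_grad()
    loss = nn.functional.binary_cross_entropy_with_logits(
        detector(batch), targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```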
arXiv Detail & Related papers (2021-03-25T02:20:08Z)
- This is not the Texture you are looking for! Introducing Novel Counterfactual Explanations for Non-Experts using Generative Adversarial Learning [59.17685450892182]
Counterfactual explanation systems try to enable counterfactual reasoning by modifying the input image.
We present a novel approach to generate such counterfactual image explanations based on adversarial image-to-image translation techniques.
Our results show that our approach leads to significantly better results regarding mental models, explanation satisfaction, trust, emotions, and self-efficacy than two state-of-the-art systems.
arXiv Detail & Related papers (2020-12-22T10:08:05Z)
- Adversarial Machine Learning in Image Classification: A Survey Towards the Defender's Perspective [1.933681537640272]
Adversarial examples are images containing subtle perturbations generated by malicious optimization algorithms.
Deep Learning algorithms have been used in security-critical applications, such as biometric recognition systems and self-driving cars.
arXiv Detail & Related papers (2020-09-08T13:21:55Z)
- Adversarial Attack on Deep Learning-Based Splice Localization [14.669890331986794]
Using a novel algorithm, we demonstrate on three non-end-to-end deep learning-based splice localization tools that image manipulations can be hidden via adversarial attacks.
We find that the resulting adversarial perturbations transfer among the tools, degrading their localization performance.
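A hedged sketch of how such transferability might be measured: craft the perturbation white-box on one model, then check how much a second model's localization IoU drops on the transferred image. The attack callable, model interfaces, and decision threshold are assumptions, not the paper's algorithm.

```python
# Hedged sketch, assuming PyTorch and two localization models that map an
# image tensor to per-pixel splice probabilities.
import torch

@torch.no_grad()
def localization_iou(pred_mask: torch.Tensor,
                     true_mask: torch.Tensor) -> float:
    inter = (pred_mask & true_mask).sum().item()
    union = (pred_mask | true_mask).sum().item()
    return inter / union if union else 1.0

def transfer_degradation(model_a, model_b, image, mask, attack):
    """Craft the perturbation on model_a; measure model_b's IoU drop."""
    adv = attack(model_a, image)             # white-box access to A only
    clean_iou = localization_iou(model_b(image) > 0.5, mask.bool())
    adv_iou = localization_iou(model_b(adv) > 0.5, mask.bool())
    return clean_iou - adv_iou               # larger = better transfer
```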
arXiv Detail & Related papers (2020-04-17T20:31:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.