Fighting Malicious Media Data: A Survey on Tampering Detection and
Deepfake Detection
- URL: http://arxiv.org/abs/2212.05667v1
- Date: Mon, 12 Dec 2022 02:54:08 GMT
- Title: Fighting Malicious Media Data: A Survey on Tampering Detection and
Deepfake Detection
- Authors: Junke Wang, Zhenxin Li, Chao Zhang, Jingjing Chen, Zuxuan Wu, Larry S.
Davis, Yu-Gang Jiang
- Abstract summary: Recent advances in deep learning, particularly deep generative models, open the doors for producing perceptually convincing images and videos at a low cost.
This paper provides a comprehensive review of the current media tampering detection approaches, and discusses the challenges and trends in this field for future research.
- Score: 115.83992775004043
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Online media data, in the forms of images and videos, are becoming mainstream
communication channels. However, recent advances in deep learning, particularly
deep generative models, open the doors for producing perceptually convincing
images and videos at a low cost, which not only poses a serious threat to the
trustworthiness of digital information but also has severe societal
implications. This motivates growing research interest in media tampering
detection, i.e., using deep learning techniques to examine whether media data
have been maliciously manipulated. Depending on the content of the targeted
images, media forgery can be divided into image tampering and Deepfake
techniques. The former typically moves or erases visual elements in
ordinary images, while the latter manipulates the expressions and even the
identity of human faces. Accordingly, the means of defense include image
tampering detection and Deepfake detection, which share a wide variety of
properties. In this paper, we provide a comprehensive review of the current
media tampering detection approaches, and discuss the challenges and trends in
this field for future research.
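As an illustration of the image tampering detection task the abstract describes, the sketch below flags spliced regions via the block statistics of a high-pass noise residual, a classic forensic cue. The function names, the 3x3 box filter, and the 8-pixel block size are all illustrative choices, not the survey's method.

```python
import numpy as np

def noise_residual(image: np.ndarray) -> np.ndarray:
    """High-pass residual: image minus a 3x3 mean-filtered copy.

    Splicing often disturbs the local noise pattern, so residual
    statistics can differ between authentic and pasted regions.
    """
    padded = np.pad(image.astype(float), 1, mode="edge")
    # 3x3 box blur built from shifted sums (no SciPy dependency).
    blurred = sum(
        padded[i:i + image.shape[0], j:j + image.shape[1]]
        for i in (0, 1, 2)
        for j in (0, 1, 2)
    ) / 9.0
    return image.astype(float) - blurred

def residual_variance_map(image: np.ndarray, block: int = 8) -> np.ndarray:
    """Per-block variance of the residual; outlier blocks are suspects."""
    r = noise_residual(image)
    h = (image.shape[0] // block) * block
    w = (image.shape[1] // block) * block
    r = r[:h, :w].reshape(h // block, block, w // block, block)
    return r.var(axis=(1, 3))
```

A region pasted in with a different noise level stands out as a block whose residual variance is far above the rest of the map.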
Related papers
- AMMeBa: A Large-Scale Survey and Dataset of Media-Based Misinformation In-The-Wild [1.4193873432298625]
We show the results of a two-year study using human raters to annotate online media-based misinformation.
We show the rise of generative AI-based content in misinformation claims.
We also show "simple" methods dominated historically, particularly context manipulations.
arXiv Detail & Related papers (2024-05-19T23:05:53Z)
- Exploring Saliency Bias in Manipulation Detection [2.156234249946792]
The social-media-fuelled explosion of fake news and misinformation supported by tampered images has spurred the development of models and datasets for image manipulation detection.
Existing detection methods mostly treat media objects in isolation, without considering the impact of specific manipulations on viewer perception.
We propose a framework to analyze the trends of visual and semantic saliency in popular image manipulation datasets and their impact on detection.
arXiv Detail & Related papers (2024-02-12T00:08:51Z)
- Recent Advances in Digital Image and Video Forensics, Anti-forensics and Counter Anti-forensics [0.0]
Image and video forensics have recently gained increasing attention due to the proliferation of manipulated images and videos.
This survey explores image and video identification and forgery detection covering both manipulated digital media and generative media.
arXiv Detail & Related papers (2024-02-03T09:01:34Z)
- Comparative Analysis of Deep-Fake Algorithms [0.0]
Deepfakes, also known as deep learning-based fake videos, have become a major concern in recent years.
These deepfake videos can be used for malicious purposes such as spreading misinformation, impersonating individuals, and creating fake news.
Deepfake detection technologies use various approaches such as facial recognition, motion analysis, and audio-visual synchronization.
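Of the three cues just listed, audio-visual synchronization is the simplest to sketch: correlate a per-frame mouth-openness signal with per-frame audio energy and flag clips where the two disagree. The signals, the Pearson statistic, and the 0.5 threshold below are hypothetical simplifications for illustration, not any specific detector from these papers.

```python
import math

def pearson(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def sync_score(mouth_openness, audio_energy, threshold=0.5):
    """Flag a clip as suspicious when per-frame lip motion and audio
    energy are weakly correlated (threshold is illustrative)."""
    r = pearson(mouth_openness, audio_energy)
    return r, r < threshold
```

In a genuine talking-head clip the mouth opens when the audio is loud, so the correlation is high; a dubbed or lip-manipulated clip breaks that coupling.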
arXiv Detail & Related papers (2023-09-06T18:17:47Z)
- Robust Deepfake On Unrestricted Media: Generation And Detection [46.576556314444865]
Recent advances in deep learning have led to substantial improvements in deepfake generation.
This chapter explores the evolution of and challenges in deepfake generation and detection.
arXiv Detail & Related papers (2022-02-13T06:53:39Z)
- Leveraging Real Talking Faces via Self-Supervision for Robust Forgery Detection [112.96004727646115]
We develop a method to detect face-manipulated videos using real talking faces.
We show that our method achieves state-of-the-art performance on cross-manipulation generalisation and robustness experiments.
Our results suggest that leveraging natural and unlabelled videos is a promising direction for the development of more robust face forgery detectors.
arXiv Detail & Related papers (2022-01-18T17:14:54Z)
- Watch Those Words: Video Falsification Detection Using Word-Conditioned Facial Motion [82.06128362686445]
We propose a multi-modal semantic forensic approach to handle both cheapfakes and visually persuasive deepfakes.
We leverage the idea of attribution to learn person-specific biometric patterns that distinguish a given speaker from others.
Unlike existing person-specific approaches, our method is also effective against attacks that focus on lip manipulation.
arXiv Detail & Related papers (2021-12-21T01:57:04Z)
- Artificial Fingerprinting for Generative Models: Rooting Deepfake Attribution in Training Data [64.65952078807086]
Photorealistic image generation has reached a new level of quality due to the breakthroughs of generative adversarial networks (GANs).
Yet, the dark side of such deepfakes, the malicious use of generated media, raises concerns about visual misinformation.
We seek a proactive and sustainable solution on deepfake detection by introducing artificial fingerprints into the models.
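That paper roots its fingerprints in the generator's training data so that every generated image inherits them. As a far simpler stand-in for the general idea of an embedded, recoverable fingerprint, the toy sketch below writes and reads a bit pattern via pixel least-significant bits; this is purely illustrative and is not the paper's technique.

```python
import numpy as np

def embed_fingerprint(image: np.ndarray, bits: list) -> np.ndarray:
    """Write a bit string into the least significant bits of the first
    len(bits) pixels. A toy stand-in for learned model fingerprints."""
    out = image.copy().ravel()
    for i, b in enumerate(bits):
        out[i] = (out[i] & 0xFE) | b  # clear LSB, then set it to b
    return out.reshape(image.shape)

def recover_fingerprint(image: np.ndarray, n_bits: int) -> list:
    """Read the fingerprint back from the pixel LSBs."""
    return [int(p & 1) for p in image.ravel()[:n_bits]]
```

The embedded pattern changes each touched pixel by at most one intensity level, so the image is visually unchanged while the fingerprint survives for attribution.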
arXiv Detail & Related papers (2020-07-16T16:49:55Z)
- Detecting Face2Face Facial Reenactment in Videos [76.9573023955201]
This research proposes a learning-based algorithm for detecting reenactment based alterations.
The proposed algorithm uses a multi-stream network that learns regional artifacts and provides a robust performance at various compression levels.
The results show state-of-the-art classification accuracy of 99.96%, 99.10%, and 91.20% for no, easy, and hard compression factors, respectively.
arXiv Detail & Related papers (2020-01-21T11:03:50Z)
- Media Forensics and DeepFakes: an overview [12.333160116225445]
The boundary between real and synthetic media has become very thin.
Deepfakes can be used to manipulate public opinion during elections, commit fraud, discredit or blackmail people.
There is an urgent need for automated tools capable of detecting false multimedia content.
arXiv Detail & Related papers (2020-01-18T00:13:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.