Detecting Deepfake Videos Using Euler Video Magnification
- URL: http://arxiv.org/abs/2101.11563v1
- Date: Wed, 27 Jan 2021 17:37:23 GMT
- Title: Detecting Deepfake Videos Using Euler Video Magnification
- Authors: Rashmiranjan Das and Gaurav Negi and Alan F. Smeaton
- Abstract summary: Deepfake videos are videos manipulated using advanced machine learning techniques.
In this paper, we examine a technique for possible identification of deepfake videos.
Our approach uses features extracted from the Euler technique to train three models to classify counterfeit and unaltered videos.
- Score: 1.8506048493564673
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances in artificial intelligence make it progressively hard to
distinguish between genuine and counterfeit media, especially images and
videos. One recent development is the rise of deepfake videos, based on
manipulating videos using advanced machine learning techniques. This involves
replacing the face of an individual from a source video with the face of a
second person in the destination video. The technique is becoming progressively
more refined as deepfakes grow more seamless and simpler to compute.
Combined with the reach and speed of social media, deepfakes could easily fool
individuals by depicting someone saying things that never happened, and thus
could persuade people into believing fictional scenarios, creating distress,
and spreading fake news. In this paper, we examine a technique for possible
identification of deepfake videos. We use Euler video magnification which
applies spatial decomposition and temporal filtering on video data to highlight
and magnify hidden features like skin pulsation and subtle motions. Our
approach uses features extracted from the Euler technique to train three models
to classify counterfeit and unaltered videos and compare the results with
existing techniques.
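The abstract describes the pipeline only at a high level: spatially decompose each frame, temporally band-pass filter the sequence to isolate subtle periodic signals such as skin pulsation, amplify them, and derive features for classification. The sketch below is a minimal illustration of that idea in Python, assuming OpenCV and SciPy; the pyramid depth, the frequency band, the amplification factor, the variance-based features and the input file name are illustrative assumptions, not the paper's exact settings.

```python
"""Minimal sketch of Euler (Eulerian) video magnification for deepfake analysis.

Assumes OpenCV and SciPy. The pyramid depth, the 0.8-3.0 Hz band (roughly the
human heart-rate range), the amplification factor and the variance-based
features are illustrative choices, not the paper's settings.
"""
import cv2
import numpy as np
from scipy.signal import butter, filtfilt


def spatial_decompose(frame, levels=3):
    """Spatial decomposition: downsample `levels` times (coarse Gaussian pyramid level)."""
    small = frame
    for _ in range(levels):
        small = cv2.pyrDown(small)
    return small


def magnify(frames, fps, low=0.8, high=3.0, alpha=30.0, levels=3):
    """Temporal band-pass filtering of the decomposed frames, then amplification."""
    # Stack spatially low-passed frames into a (T, H', W', C) array.
    stack = np.stack([spatial_decompose(f.astype(np.float32), levels) for f in frames])
    # Butterworth band-pass along the time axis isolates subtle periodic colour/motion changes.
    nyquist = 0.5 * fps
    b, a = butter(1, [low / nyquist, high / nyquist], btype="band")
    filtered = filtfilt(b, a, stack, axis=0)
    # Amplify the filtered signal, upsample it and add it back to the original frames.
    magnified = []
    for original, band in zip(frames, filtered):
        band = cv2.resize((alpha * band).astype(np.float32),
                          (original.shape[1], original.shape[0]))
        out = np.clip(original.astype(np.float32) + band, 0, 255).astype(np.uint8)
        magnified.append(out)
    return magnified, filtered


def toy_features(filtered):
    """Illustrative per-video features: statistics of the band-passed signal's temporal variance."""
    variance_map = filtered.var(axis=0)  # per-pixel temporal variance
    return np.array([variance_map.mean(), variance_map.std(), variance_map.max()])


if __name__ == "__main__":
    cap = cv2.VideoCapture("face_clip.mp4")  # hypothetical input clip
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()
    _, filtered = magnify(frames, fps)
    print("feature vector:", toy_features(filtered))
```

A feature vector like the one printed above could then be fed to three off-the-shelf classifiers (for example scikit-learn's LogisticRegression, SVC and RandomForestClassifier, used here only as hypothetical stand-ins, since the summary does not name the paper's three models) to separate counterfeit from unaltered videos.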
Related papers
- Deepfake detection in videos with multiple faces using geometric-fakeness features [79.16635054977068]
Deepfakes of victims or public figures can be used by fraudsters for blackmail, extortion and financial fraud.
In our research, we propose to use geometric-fakeness features (GFF) that characterize the dynamic degree of face presence in a video.
We employ our approach to analyze videos in which multiple faces are present simultaneously.
arXiv Detail & Related papers (2024-10-10T13:10:34Z)
- What Matters in Detecting AI-Generated Videos like Sora? [51.05034165599385]
The gap between synthetic and real-world videos remains under-explored.
In this study, we compare real-world videos with those generated by a state-of-the-art AI model, Stable Video Diffusion.
Our model is capable of detecting videos generated by Sora with high accuracy, even without exposure to any Sora videos during training.
arXiv Detail & Related papers (2024-06-27T23:03:58Z)
- Unmasking Illusions: Understanding Human Perception of Audiovisual Deepfakes [49.81915942821647]
This paper aims to evaluate the human ability to discern deepfake videos through a subjective study.
We present our findings by comparing human observers to five state-of-the-art audiovisual deepfake detection models.
We found that all AI models performed better than humans when evaluated on the same 40 videos.
arXiv Detail & Related papers (2024-05-07T07:57:15Z)
- Comparative Analysis of Deep-Fake Algorithms [0.0]
Deepfakes, also known as deep learning-based fake videos, have become a major concern in recent years.
These deepfake videos can be used for malicious purposes such as spreading misinformation, impersonating individuals, and creating fake news.
Deepfake detection technologies use various approaches such as facial recognition, motion analysis, and audio-visual synchronization.
arXiv Detail & Related papers (2023-09-06T18:17:47Z)
- Undercover Deepfakes: Detecting Fake Segments in Videos [1.2609216345578933]
A new paradigm of deepfakes has emerged: mostly real videos altered only slightly to distort the truth.
In this paper, we present a deepfake detection method that can address this issue by performing deepfake prediction at the frame and video levels.
In particular, the paradigm we address will form a powerful tool for the moderation of deepfakes, where human oversight can be better targeted to the parts of videos suspected of being deepfakes.
arXiv Detail & Related papers (2023-05-11T04:43:10Z)
- Copy Motion From One to Another: Fake Motion Video Generation [53.676020148034034]
A compelling application of artificial intelligence is to generate a video of a target person performing arbitrary desired motion.
Current methods typically employ GANs with an L2 loss to assess the authenticity of the generated videos.
We propose a theoretically motivated Gromov-Wasserstein loss that facilitates learning the mapping from a pose to a foreground image.
Our method is able to generate realistic target person videos, faithfully copying complex motions from a source person.
arXiv Detail & Related papers (2022-05-03T08:45:22Z)
- Audio-Visual Person-of-Interest DeepFake Detection [77.04789677645682]
The aim of this work is to propose a deepfake detector that can cope with the wide variety of manipulation methods and scenarios encountered in the real world.
We leverage a contrastive learning paradigm to learn the moving-face and audio segment embeddings that are most discriminative for each identity.
Our method can detect both single-modality (audio-only, video-only) and multi-modality (audio-video) attacks, and is robust to low-quality or corrupted videos.
arXiv Detail & Related papers (2022-04-06T20:51:40Z)
- Watch Those Words: Video Falsification Detection Using Word-Conditioned Facial Motion [82.06128362686445]
We propose a multi-modal semantic forensic approach to handle both cheapfakes and visually persuasive deepfakes.
We leverage the idea of attribution to learn person-specific biometric patterns that distinguish a given speaker from others.
Unlike existing person-specific approaches, our method is also effective against attacks that focus on lip manipulation.
arXiv Detail & Related papers (2021-12-21T01:57:04Z)
- Detection of GAN-synthesized street videos [21.192357452920007]
This paper investigates the detectability of a new kind of AI-generated video depicting driving street sequences (referred to here as DeepStreets videos).
We present a simple frame-based detector, achieving very good performance on state-of-the-art DeepStreets videos generated by the Vid2vid architecture.
arXiv Detail & Related papers (2021-09-10T16:59:15Z)
- Deepfake Video Forensics based on Transfer Learning [0.0]
"Deepfake" can create fake images and videos that humans cannot differentiate from the genuine ones.
This paper details retraining the image classification models to apprehend the features from each deepfake video frames.
When checking Deepfake videos, this technique received more than 87 per cent accuracy.
arXiv Detail & Related papers (2020-04-29T13:21:28Z)
- Adversarial Deepfakes: Evaluating Vulnerability of Deepfake Detectors to Adversarial Examples [23.695497512694068]
Recent advances in video manipulation techniques have made the generation of fake videos more accessible than ever before.
Manipulated videos can fuel disinformation and reduce trust in media.
Recently developed deepfake detection methods rely on deep neural networks (DNNs) to distinguish AI-generated fake videos from real videos.
arXiv Detail & Related papers (2020-02-09T07:10:58Z)
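The last entry above notes that DNN-based detectors can themselves be attacked with adversarial examples. The sketch below is a generic single-step FGSM illustration of that idea, not the cited paper's attack setup: the toy detector architecture, the 224x224 input size and the epsilon value are all assumptions.

```python
"""Generic single-step FGSM attack against a toy frame-level deepfake detector.

The detector architecture, the 224x224 input and eps are assumptions for
illustration; this is not the cited paper's experimental setup.
"""
import torch
import torch.nn as nn

# Toy stand-in for a DNN-based detector: one RGB frame in, one logit out (>0 means "fake").
detector = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 1),
)


def fgsm_attack(frame, label, model, eps=2.0 / 255):
    """One signed-gradient step that increases the detector's loss on the true label."""
    frame = frame.clone().detach().requires_grad_(True)
    loss = nn.functional.binary_cross_entropy_with_logits(model(frame), label)
    loss.backward()
    adv = frame + eps * frame.grad.sign()  # step in the loss-increasing direction
    return adv.clamp(0.0, 1.0).detach()


# Usage: a "fake" frame (label 1) nudged so the detector is more likely to call it real.
fake_frame = torch.rand(1, 3, 224, 224)
label = torch.ones(1, 1)
adv_frame = fgsm_attack(fake_frame, label, detector)
print(detector(fake_frame).item(), detector(adv_frame).item())
```

Perturbations of this kind are the vulnerability that the entry above evaluates for deepfake detectors.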