FakeBuster: A DeepFakes Detection Tool for Video Conferencing Scenarios
- URL: http://arxiv.org/abs/2101.03321v1
- Date: Sat, 9 Jan 2021 09:06:08 GMT
- Title: FakeBuster: A DeepFakes Detection Tool for Video Conferencing Scenarios
- Authors: Vineet Mehta, Parul Gupta, Ramanathan Subramanian, and Abhinav Dhall
- Abstract summary: This paper proposes a new DeepFake detector FakeBuster for detecting impostors during video conferencing and manipulated faces on social media.
FakeBuster is a standalone deep learning based solution, which enables a user to detect if another person's video is manipulated or spoofed during a video conferencing based meeting.
- Score: 9.10316289334594
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a new DeepFake detector, FakeBuster, for detecting
impostors during video conferencing and manipulated faces on social media.
FakeBuster is a standalone deep learning based solution that enables a user
to detect whether another person's video is manipulated or spoofed during a
video-conferencing meeting. The tool is independent of video conferencing
solutions and has been tested with the Zoom and Skype applications. It uses a 3D
convolutional neural network to predict segment-wise fakeness scores for a video.
The network is trained on a combination of datasets such as Deeperforensics,
DFDC, VoxCeleb, and deepfake videos created using locally captured images (for
video conferencing scenarios). This exposes the network to varied environments
and perturbations, which improves the generalization of the deepfake detector.
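The segment-wise scoring described in the abstract can be sketched in plain Python. The segment length, stride, threshold, and mean aggregation below are illustrative assumptions, and the paper's 3D CNN is replaced by a placeholder `model` callable; this shows only the shape of the pipeline, not the authors' implementation.

```python
def segment_scores(frames, model, segment_len=16, stride=8):
    """Slide a fixed-length window over the frames and score each segment.

    `model` stands in for the 3D CNN: any callable mapping a list of
    frames to a fakeness score in [0, 1].
    """
    scores = []
    for start in range(0, max(len(frames) - segment_len + 1, 1), stride):
        segment = frames[start:start + segment_len]
        scores.append(model(segment))
    return scores

def is_fake(scores, threshold=0.5):
    """Flag the video if the mean segment-wise fakeness score exceeds the
    threshold (mean aggregation is an assumption for illustration)."""
    return sum(scores) / len(scores) > threshold
```

For a 40-frame clip with a 16-frame window and stride 8, this yields four overlapping segments, each scored independently before aggregation.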
Related papers
- Deepfake detection in videos with multiple faces using geometric-fakeness features [79.16635054977068]
Deepfakes of victims or public figures can be used by fraudsters for blackmail, extortion, and financial fraud.
In our research we propose geometric-fakeness features (GFF) that characterize the dynamic degree of a face's presence in a video.
We employ our approach to analyze videos in which multiple faces are present simultaneously.
arXiv Detail & Related papers (2024-10-10T13:10:34Z) - Vulnerability of Automatic Identity Recognition to Audio-Visual Deepfakes [13.042731289687918]
We present the first realistic audio-visual deepfake database, SWAN-DF, in which lips and speech are well synchronized.
We demonstrate the vulnerability of a state-of-the-art speaker recognition system, the ECAPA-TDNN-based model from SpeechBrain.
arXiv Detail & Related papers (2023-11-29T14:18:04Z) - AVTENet: Audio-Visual Transformer-based Ensemble Network Exploiting Multiple Experts for Video Deepfake Detection [53.448283629898214]
The recent proliferation of hyper-realistic deepfake videos has drawn attention to the threat of audio and visual forgeries.
Most previous work on detecting AI-generated fake videos uses only the visual or the audio modality.
We propose an Audio-Visual Transformer-based Ensemble Network (AVTENet) framework that considers both acoustic manipulation and visual manipulation.
arXiv Detail & Related papers (2023-10-19T19:01:26Z) - FTFDNet: Learning to Detect Talking Face Video Manipulation with Tri-Modality Interaction [9.780101247514366]
The optical flow of fake talking-face videos is disordered, especially in the lip region.
A novel audio-visual attention mechanism (AVAM) is proposed to discover more informative features.
The proposed FTFDNet achieves better detection performance than other state-of-the-art DeepFake video detection methods.
arXiv Detail & Related papers (2023-07-08T14:45:16Z) - DeFakePro: Decentralized DeepFake Attacks Detection using ENF Authentication [66.2466055910145]
DeFakePro is a consensus mechanism-based Deepfake detection technique in online video conferencing tools.
The similarity in ENF signal fluctuations is utilized in the PoENF algorithm to authenticate the media broadcast in conferencing tools.
arXiv Detail & Related papers (2022-07-22T01:22:11Z) - Audio-Visual Person-of-Interest DeepFake Detection [77.04789677645682]
The aim of this work is to propose a deepfake detector that can cope with the wide variety of manipulation methods and scenarios encountered in the real world.
We leverage a contrastive learning paradigm to learn the moving-face and audio segment embeddings that are most discriminative for each identity.
Our method can detect both single-modality (audio-only, video-only) and multi-modality (audio-video) attacks, and is robust to low-quality or corrupted videos.
arXiv Detail & Related papers (2022-04-06T20:51:40Z) - Evaluation of an Audio-Video Multimodal Deepfake Dataset using Unimodal and Multimodal Detectors [18.862258543488355]
Deepfakes can cause security and privacy issues.
Cloning human voices with deep-learning technologies is also an emerging domain.
A good deepfake detector must therefore detect deepfakes across multiple modalities.
arXiv Detail & Related papers (2021-09-07T11:00:20Z) - WildDeepfake: A Challenging Real-World Dataset for Deepfake Detection [82.42495493102805]
We introduce a new dataset WildDeepfake which consists of 7,314 face sequences extracted from 707 deepfake videos collected completely from the internet.
We conduct a systematic evaluation of a set of baseline detection networks on both existing and our WildDeepfake datasets, and show that WildDeepfake is indeed a more challenging dataset, where the detection performance can decrease drastically.
arXiv Detail & Related papers (2021-01-05T11:10:32Z) - Sharp Multiple Instance Learning for DeepFake Video Detection [54.12548421282696]
We introduce a new problem of partial face attack in DeepFake video, where only video-level labels are provided but not all the faces in the fake videos are manipulated.
A sharp MIL (S-MIL) is proposed which builds direct mapping from instance embeddings to bag prediction.
Experiments on FFPMS and widely used DFDC dataset verify that S-MIL is superior to other counterparts for partially attacked DeepFake video detection.
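The "direct mapping from instance embeddings to bag prediction" in MIL can be illustrated with a sharp pooling operator. The log-sum-exp pooling below is one common choice for such sharp aggregation and is an assumption for illustration, not necessarily the S-MIL paper's exact formulation.

```python
import math

def sharp_bag_score(instance_scores, sharpness=10.0):
    """Pool per-face (instance) fakeness scores into one video-level (bag)
    score with log-sum-exp pooling. As `sharpness` grows, the pooled score
    approaches the maximum, so a few manipulated faces dominate the bag
    prediction even when most faces in the video are real."""
    n = len(instance_scores)
    m = max(instance_scores)
    # Numerically stable log-sum-exp: shift by the max before exponentiating.
    lse = math.log(sum(math.exp(sharpness * (s - m)) for s in instance_scores) / n)
    return m + lse / sharpness
```

With `sharpness=10`, a bag of scores `[0.0, 0.0, 1.0]` pools to roughly 0.89, far above the 0.33 mean, reflecting the MIL assumption that a single manipulated face should make the whole video positive.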
arXiv Detail & Related papers (2020-08-11T08:52:17Z) - The DeepFake Detection Challenge (DFDC) Dataset [8.451007921188019]
Deepfakes are a technique that allows anyone to swap two identities in a single video.
To counter this emerging threat, we have constructed an extremely large face swap video dataset.
All recorded subjects agreed to participate in and have their likenesses modified during the construction of the face-swapped dataset.
arXiv Detail & Related papers (2020-06-12T18:15:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.