A dual benchmarking study of facial forgery and facial forensics
- URL: http://arxiv.org/abs/2111.12912v1
- Date: Thu, 25 Nov 2021 05:01:08 GMT
- Title: A dual benchmarking study of facial forgery and facial forensics
- Authors: Minh Tam Pham and Thanh Trung Huynh and Van Vinh Tong and Thanh Tam
Nguyen and Thanh Thi Nguyen and Hongzhi Yin and Quoc Viet Hung Nguyen
- Abstract summary: In recent years, visual forgery has reached a level of sophistication at which humans can no longer identify fraud.
A rich body of visual forensic techniques has been proposed in an attempt to stop this dangerous trend.
We present a benchmark that provides in-depth insights into visual forgery and visual forensics.
- Score: 28.979062525272866
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, visual forgery has reached a level of sophistication at
which humans can no longer identify fraud, which poses a significant threat to information
security. A wide range of malicious applications have emerged, such as fake
news, defamation or blackmailing of celebrities, impersonation of politicians
in political warfare, and the spreading of rumours to attract views. As a
result, a rich body of visual forensic techniques has been proposed in an
attempt to stop this dangerous trend. In this paper, we present a benchmark
that provides in-depth insights into visual forgery and visual forensics, using
a comprehensive and empirical approach. More specifically, we develop an
independent framework that integrates state-of-the-art counterfeit generators
and detectors, and measure the performance of these techniques using various
criteria. We also perform an exhaustive analysis of the benchmarking results,
to determine the characteristics of the methods that serve as a comparative
reference in this never-ending war between measures and countermeasures.
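The dual-benchmark idea described in the abstract, pairing each counterfeit generator with each forensic detector and scoring every pair, can be sketched as follows. This is a minimal illustrative harness under assumed interfaces; the generator and detector names and the toy data are hypothetical placeholders, not the actual methods evaluated in the paper:

```python
# Minimal sketch of a dual benchmark: every forgery generator is paired with
# every forensic detector, and each pair is scored by detection accuracy.
# All names and the toy data below are illustrative placeholders, not the
# actual generators or detectors evaluated in the paper.
from typing import Callable, Dict, List, Tuple

Image = List[float]  # stand-in for real image data


def benchmark(generators: Dict[str, Callable[[Image], Image]],
              detectors: Dict[str, Callable[[Image], bool]],
              real_images: List[Image]) -> Dict[Tuple[str, str], float]:
    """Return detection accuracy for every (generator, detector) pair."""
    results = {}
    for g_name, generate in generators.items():
        fakes = [generate(img) for img in real_images]
        for d_name, detect in detectors.items():
            # Fakes should be flagged as forged; real images should not.
            correct = (sum(detect(f) for f in fakes)
                       + sum(not detect(r) for r in real_images))
            results[(g_name, d_name)] = correct / (2 * len(real_images))
    return results


# Toy example: the "forgery" doubles pixel values; the "detector" flags any
# image whose maximum pixel exceeds 1.0.
gens = {"toy_gan": lambda img: [2 * p for p in img]}
dets = {"threshold": lambda img: max(img) > 1.0}
reals = [[0.2, 0.6], [0.9, 0.1]]
print(benchmark(gens, dets, reals))  # {('toy_gan', 'threshold'): 1.0}
```

Extending the result dictionary with additional criteria (e.g. per-pair AUC or robustness under compression) would mirror the paper's "various criteria" evaluation.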
Related papers
- Face De-identification: State-of-the-art Methods and Comparative Studies [32.333766763819796]
Face de-identification is regarded as an effective means to protect the privacy of facial images.
We provide a review of state-of-the-art face de-identification methods, categorized into three levels: pixel-level, representation-level, and semantic-level techniques.
arXiv Detail & Related papers (2024-11-15T01:00:00Z) - Datasets, Clues and State-of-the-Arts for Multimedia Forensics: An
Extensive Review [19.30075248247771]
This survey focuses on approaches for tampering detection in multimedia data using deep learning models.
It presents a detailed analysis of benchmark datasets for malicious manipulation detection that are publicly available.
It also offers a comprehensive list of tampering clues and commonly used deep learning architectures.
arXiv Detail & Related papers (2024-01-13T07:03:58Z) - GazeForensics: DeepFake Detection via Gaze-guided Spatial Inconsistency
Learning [63.547321642941974]
We introduce GazeForensics, an innovative DeepFake detection method that utilizes gaze representation obtained from a 3D gaze estimation model.
Experiment results reveal that our proposed GazeForensics outperforms the current state-of-the-art methods.
arXiv Detail & Related papers (2023-11-13T04:48:33Z) - COMICS: End-to-end Bi-grained Contrastive Learning for Multi-face Forgery Detection [56.7599217711363]
Most face forgery recognition methods can only process one face at a time.
We propose COMICS, an end-to-end framework for multi-face forgery detection.
arXiv Detail & Related papers (2023-08-03T03:37:13Z) - Towards General Visual-Linguistic Face Forgery Detection [95.73987327101143]
Deepfakes are realistic face manipulations that can pose serious threats to security, privacy, and trust.
Existing methods mostly treat this task as binary classification, which uses digital labels or mask signals to train the detection model.
We propose a novel paradigm named Visual-Linguistic Face Forgery Detection(VLFFD), which uses fine-grained sentence-level prompts as the annotation.
arXiv Detail & Related papers (2023-07-31T10:22:33Z) - Building an Invisible Shield for Your Portrait against Deepfakes [34.65356811439098]
We propose a novel framework - Integrity Encryptor, aiming to protect portraits in a proactive strategy.
Our methodology involves covertly encoding messages that are closely associated with key facial attributes into authentic images.
The modified facial attributes serve as a means of detecting manipulated images through a comparison of the decoded messages.
arXiv Detail & Related papers (2023-05-22T10:01:28Z) - Few-shot Forgery Detection via Guided Adversarial Interpolation [56.59499187594308]
Existing forgery detection methods suffer from significant performance drops when applied to unseen novel forgery approaches.
We propose Guided Adversarial Interpolation (GAI) to overcome the few-shot forgery detection problem.
Our method is validated to be robust to choices of majority and minority forgery approaches.
arXiv Detail & Related papers (2022-04-12T16:05:10Z) - Leveraging Real Talking Faces via Self-Supervision for Robust Forgery
Detection [112.96004727646115]
We develop a method to detect face-manipulated videos using real talking faces.
We show that our method achieves state-of-the-art performance on cross-manipulation generalisation and robustness experiments.
Our results suggest that leveraging natural and unlabelled videos is a promising direction for the development of more robust face forgery detectors.
arXiv Detail & Related papers (2022-01-18T17:14:54Z) - OpenForensics: Large-Scale Challenging Dataset For Multi-Face Forgery
Detection And Segmentation In-The-Wild [48.67582300190131]
This paper presents a study on two new countermeasure tasks: multi-face forgery detection and segmentation in-the-wild.
Localizing forged faces among multiple human faces in unrestricted natural scenes is far more challenging than the traditional deepfake recognition task.
With its rich annotations, our OpenForensics dataset has great potential for research in both deepfake prevention and general human face detection.
arXiv Detail & Related papers (2021-07-30T08:15:41Z) - Adversarial Machine Learning in Image Classification: A Survey Towards
the Defender's Perspective [1.933681537640272]
Adversarial examples are images containing subtle perturbations generated by malicious optimization algorithms.
Deep Learning algorithms have been used in security-critical applications, such as biometric recognition systems and self-driving cars.
arXiv Detail & Related papers (2020-09-08T13:21:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.