IDRetracor: Towards Visual Forensics Against Malicious Face Swapping
- URL: http://arxiv.org/abs/2408.06635v1
- Date: Tue, 13 Aug 2024 04:53:48 GMT
- Title: IDRetracor: Towards Visual Forensics Against Malicious Face Swapping
- Authors: Jikang Cheng, Jiaxin Ai, Zhen Han, Chao Liang, Qin Zou, Zhongyuan Wang, Qian Wang
- Abstract summary: The face swapping technique based on deepfake methods poses significant social risks to personal identity security.
We propose a novel task named face retracing, which considers retracing the original target face from the given fake one via inverse mapping.
We show that the IDRetracor exhibits promising retracing performance from both quantitative and qualitative perspectives.
- Score: 30.804429527783395
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: The face swapping technique based on deepfake methods poses significant social risks to personal identity security. While numerous deepfake detection methods have been proposed as countermeasures against malicious face swapping, they can only output binary labels (Fake/Real) for distinguishing fake content without reliable and traceable evidence. To achieve visual forensics and target face attribution, we propose a novel task named face retracing, which considers retracing the original target face from the given fake one via inverse mapping. Toward this goal, we propose an IDRetracor that can retrace arbitrary original target identities from fake faces generated by multiple face swapping methods. Specifically, we first adopt a mapping resolver to perceive the possible solution space of the original target face for the inverse mappings. Then, we propose mapping-aware convolutions to retrace the original target face from the fake one. Such convolutions contain multiple kernels that can be combined under the control of the mapping resolver to tackle different face swapping mappings dynamically. Extensive experiments demonstrate that the IDRetracor exhibits promising retracing performance from both quantitative and qualitative perspectives.
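The abstract describes the architecture only at a high level: a mapping resolver scores candidate inverse mappings, and mapping-aware convolutions combine multiple kernels under the resolver's control. The following is a minimal PyTorch sketch of that idea under stated assumptions; the class names, layer sizes, and the batch-averaged weighting are illustrative choices, not the authors' implementation.

```python
# Hedged sketch of a "mapping-aware convolution": a bank of kernels is mixed
# into one convolution kernel using soft scores from a mapping resolver.
# All names and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MappingAwareConv2d(nn.Module):
    """Convolution whose kernel is a resolver-weighted mix of K candidate kernels."""

    def __init__(self, in_ch, out_ch, kernel_size=3, num_mappings=4):
        super().__init__()
        # One candidate kernel per hypothesized face-swapping mapping.
        self.kernels = nn.Parameter(
            torch.randn(num_mappings, out_ch, in_ch, kernel_size, kernel_size) * 0.02
        )
        self.bias = nn.Parameter(torch.zeros(out_ch))

    def forward(self, x, mapping_scores):
        # mapping_scores: (num_mappings,) soft scores from the mapping resolver.
        w = torch.softmax(mapping_scores, dim=0)
        kernel = (w.view(-1, 1, 1, 1, 1) * self.kernels).sum(dim=0)
        return F.conv2d(x, kernel, self.bias, padding=kernel.shape[-1] // 2)


class MappingResolver(nn.Module):
    """Predicts soft scores over candidate inverse mappings from the fake face."""

    def __init__(self, num_mappings=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, num_mappings)
        )

    def forward(self, fake_face):
        # Averaged over the batch for simplicity; a per-sample variant
        # would combine kernels with grouped convolutions instead.
        return self.net(fake_face).mean(dim=0)


# Usage: retrace features from a fake face with resolver-controlled convolution.
fake = torch.randn(1, 3, 64, 64)
resolver, conv = MappingResolver(), MappingAwareConv2d(3, 3)
features = conv(fake, resolver(fake))
```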
Related papers
- OSDFace: One-Step Diffusion Model for Face Restoration [72.5045389847792]
Diffusion models have demonstrated impressive performance in face restoration.
We propose OSDFace, a novel one-step diffusion model for face restoration.
Results demonstrate that OSDFace surpasses current state-of-the-art (SOTA) methods in both visual quality and quantitative metrics.
arXiv Detail & Related papers (2024-11-26T07:07:48Z) - ID-Guard: A Universal Framework for Combating Facial Manipulation via Breaking Identification [60.73617868629575]
The misuse of deep learning-based facial manipulation poses a potential threat to civil rights.
To prevent such fraud at its source, proactive defense technologies have been proposed to disrupt the manipulation process.
We propose a novel universal framework for combating facial manipulation, called ID-Guard.
arXiv Detail & Related papers (2024-09-20T09:30:08Z) - G2Face: High-Fidelity Reversible Face Anonymization via Generative and Geometric Priors [71.69161292330504]
Reversible face anonymization seeks to replace sensitive identity information in facial images with synthesized alternatives.
This paper introduces G2Face, which leverages both generative and geometric priors to enhance identity manipulation.
Our method outperforms existing state-of-the-art techniques in face anonymization and recovery, while preserving high data utility.
arXiv Detail & Related papers (2024-08-18T12:36:47Z) - Face Reconstruction Transfer Attack as Out-of-Distribution Generalization [15.258162177124317]
We aim to reconstruct face images that are capable of transferring face attacks to unseen encoders.
Inspired by its OOD nature, we propose to solve Face Reconstruction Transfer Attack (FRTA) by Averaged Latent Search and Unsupervised Validation with pseudo target (ALSUV).
arXiv Detail & Related papers (2024-07-02T16:21:44Z) - Recap: Detecting Deepfake Video with Unpredictable Tampered Traces via Recovering Faces and Mapping Recovered Faces [35.04806736119123]
We propose Recap, a novel Deepfake detection model that exposes unspecific facial part inconsistencies by recovering faces.
In the recovering stage, the model focuses on randomly masking regions of interest and reconstructing real faces without unpredictable tampered traces.
In the mapping stage, the output of the recovery phase serves as supervision to guide the facial mapping process.
arXiv Detail & Related papers (2023-08-19T06:18:11Z) - ReliableSwap: Boosting General Face Swapping Via Reliable Supervision [9.725105108879717]
This paper proposes to construct reliable supervision, dubbed cycle triplets, which serves as the image-level guidance when the source identity differs from the target one during training.
Specifically, we use face reenactment and blending techniques to synthesize the swapped face from real images in advance.
Our face swapping framework, named ReliableSwap, can boost the performance of any existing face swapping network with negligible overhead.
arXiv Detail & Related papers (2023-06-08T17:01:14Z) - Semantic-aware One-shot Face Re-enactment with Dense Correspondence Estimation [100.60938767993088]
One-shot face re-enactment is a challenging task due to the identity mismatch between source and driving faces.
This paper proposes to use 3D Morphable Model (3DMM) for explicit facial semantic decomposition and identity disentanglement.
arXiv Detail & Related papers (2022-11-23T03:02:34Z) - Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method is built on a substitute model that pursues face reconstruction, and then transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models (see the sketch after this list).
arXiv Detail & Related papers (2022-04-26T14:36:06Z) - FaceController: Controllable Attribute Editing for Face in the Wild [74.56117807309576]
We propose a simple feed-forward network to generate high-fidelity manipulated faces.
By simply employing some existing and easy-obtainable prior information, our method can control, transfer, and edit diverse attributes of faces in the wild.
In our method, we decouple identity, expression, pose, and illumination using 3D priors, and separate texture and colors using region-wise style codes.
arXiv Detail & Related papers (2021-02-23T02:47:28Z) - Vulnerability of Face Recognition Systems Against Composite Face Reconstruction Attack [3.3707422585608953]
Rounding the confidence score is considered a trivial yet simple and effective countermeasure against gradient-descent-based image reconstruction attacks.
In this paper, we prove that face reconstruction attacks based on composite faces can reveal the inefficiency of the rounding policy as a countermeasure.
arXiv Detail & Related papers (2020-08-23T03:37:51Z)
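The restricted black-box attack summarized in the list above crafts adversarial perturbations on a locally available substitute face-reconstruction model and transfers them to the inaccessible DeepFake model. Below is a rough PGD-style sketch of that transfer idea, assuming a toy autoencoder as the substitute; the names and hyperparameters are illustrative, not taken from the paper.

```python
# Hedged sketch of the substitute-model transfer idea: maximize the
# substitute's reconstruction error within an L-inf ball, then reuse the
# perturbed image against an inaccessible face-swapping model.
import torch
import torch.nn as nn


def pgd_on_substitute(substitute, image, steps=10, eps=8 / 255, alpha=2 / 255):
    """Maximize the substitute's reconstruction error within an L-inf ball."""
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = nn.functional.mse_loss(substitute(adv), image)
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()      # ascend the loss
        adv = image + (adv - image).clamp(-eps, eps)  # project back to the eps-ball
        adv = adv.clamp(0, 1)                         # keep a valid image range
    return adv.detach()


# Toy substitute autoencoder standing in for a face-reconstruction network.
substitute = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 3, 3, padding=1)
)
clean_face = torch.rand(1, 3, 64, 64)
adversarial_face = pgd_on_substitute(substitute, clean_face)
# adversarial_face would then be submitted to the black-box face-swapping model.
```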