Building an Invisible Shield for Your Portrait against Deepfakes
- URL: http://arxiv.org/abs/2305.12881v1
- Date: Mon, 22 May 2023 10:01:28 GMT
- Title: Building an Invisible Shield for Your Portrait against Deepfakes
- Authors: Jiazhi Guan, Tianshu Hu, Hang Zhou, Zhizhi Guo, Lirui Deng, Chengbin
Quan, Errui Ding, Youjian Zhao
- Abstract summary: We propose a novel framework - Integrity Encryptor, aiming to protect portraits in a proactive strategy.
Our methodology involves covertly encoding messages that are closely associated with key facial attributes into authentic images.
The modified facial attributes serve as a means of detecting manipulated images through a comparison of the decoded messages.
- Score: 34.65356811439098
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The issue of detecting deepfakes has garnered significant attention in the
research community, with the goal of identifying facial manipulations for abuse
prevention. Although recent studies have focused on developing generalized
models that can detect various types of deepfakes, their performance is not
always reliable and stable, which poses limitations in real-world
applications. Instead of learning a forgery detector, in this paper, we propose
a novel framework - Integrity Encryptor, aiming to protect portraits in a
proactive strategy. Our methodology involves covertly encoding messages that
are closely associated with key facial attributes into authentic images prior
to their public release. Unlike authentic images, where the hidden messages can
be extracted with precision, manipulating the facial attributes through
deepfake techniques can disrupt the decoding process. Consequently, the
modified facial attributes serve as a means of detecting manipulated images
through a comparison of the decoded messages. Our encryption approach is
characterized by its simplicity and efficiency, and the resulting method exhibits
good robustness to common image-processing perturbations such as degradation and
noise. Compared to baselines that struggle to detect deepfakes in a black-box
setting, our method, which uses conditional encryption, delivers superior
performance across a range of forgery types. In experiments conducted on our
protected data, our approach
outperforms existing state-of-the-art methods by a significant margin.
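To make the protect-then-verify workflow above concrete, here is a minimal, purely illustrative Python/NumPy sketch. The block-statistic "attribute" key, the LSB-style embedding, and the 0.2 bit-error-rate threshold are stand-in assumptions for illustration only; the Integrity Encryptor itself uses learned encoder/decoder networks conditioned on real facial attributes and is robust to benign image processing, which this toy code is not.

```python
# Illustrative sketch only: a toy "attribute"-keyed watermark with LSB embedding.
# The real Integrity Encryptor uses learned networks; every function below is a
# hypothetical stand-in, not the paper's method.
import numpy as np


def extract_attribute_key(image: np.ndarray) -> np.ndarray:
    """Toy stand-in for a facial-attribute key: 16 bits from coarse block
    statistics of the green channel (untouched by the toy embedding below)."""
    green = image[..., 1].astype(np.float32)
    h, w = green.shape
    blocks = green[: h - h % 4, : w - w % 4].reshape(4, h // 4, 4, w // 4)
    block_means = blocks.mean(axis=(1, 3))          # 4x4 grid of block means
    return (block_means > green.mean()).astype(np.uint8).ravel()


def embed_message(image: np.ndarray, message: np.ndarray) -> np.ndarray:
    """Hide the message, conditioned (XOR-ed) on the attribute key, in the
    LSBs of the red channel of the first pixel row. Purely illustrative."""
    key = extract_attribute_key(image)
    conditioned = np.bitwise_xor(message, np.resize(key, message.shape))
    protected = image.copy()
    row = protected[0, : len(conditioned), 0]
    protected[0, : len(conditioned), 0] = (row & 0xFE) | conditioned
    return protected


def decode_message(image: np.ndarray, length: int) -> np.ndarray:
    """Recover the message using the attribute key of the *received* image.
    If a deepfake altered the facial content, the key changes and the
    recovered bits are scrambled."""
    key = extract_attribute_key(image)
    bits = image[0, :length, 0] & 1
    return np.bitwise_xor(bits, np.resize(key, bits.shape))


def is_manipulated(received: np.ndarray, original_message: np.ndarray,
                   ber_threshold: float = 0.2) -> bool:
    """Flag the image if the decoded message disagrees with the registered one."""
    decoded = decode_message(received, len(original_message))
    bit_error_rate = float(np.mean(decoded != original_message))
    return bit_error_rate > ber_threshold


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    portrait = rng.integers(0, 256, size=(128, 128, 3), dtype=np.uint8)
    message = rng.integers(0, 2, size=64, dtype=np.uint8)
    protected = embed_message(portrait, message)
    # Untouched protected image decodes cleanly -> not flagged.
    print("manipulated?", is_manipulated(protected, message))
```

The essential point is the verification logic: decoding is keyed to the facial content of the received image, so a face swap or reenactment that changes that content scrambles the recovered message and pushes the bit error rate above the acceptance threshold, while an untouched image decodes cleanly.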
Related papers
- Social Media Authentication and Combating Deepfakes using Semi-fragile Invisible Image Watermarking [6.246098300155482]
We propose a semi-fragile image watermarking technique that embeds an invisible secret message into real images for media authentication.
Our proposed framework is designed to be fragile to facial manipulations or tampering while being robust to benign image-processing operations and watermark removal attacks.
arXiv Detail & Related papers (2024-10-02T18:05:03Z)
- UniForensics: Face Forgery Detection via General Facial Representation [60.5421627990707]
High-level semantic features are less susceptible to perturbations and not limited to forgery-specific artifacts, thus having stronger generalization.
We introduce UniForensics, a novel deepfake detection framework that leverages a transformer-based video network, with a meta-functional face classification for enriched facial representation.
arXiv Detail & Related papers (2024-07-26T20:51:54Z)
- Semantic Contextualization of Face Forgery: A New Definition, Dataset, and Detection Method [77.65459419417533]
We place face forgery in a semantic context and define computational methods that alter semantic face attributes as sources of face forgery.
We construct a large face forgery image dataset, where each image is associated with a set of labels organized in a hierarchical graph.
We propose a semantics-oriented face forgery detection method that captures label relations and prioritizes the primary task.
arXiv Detail & Related papers (2024-05-14T10:24:19Z)
- DeepFidelity: Perceptual Forgery Fidelity Assessment for Deepfake Detection [67.3143177137102]
Deepfake detection refers to detecting artificially generated or edited faces in images or videos.
We propose a novel Deepfake detection framework named DeepFidelity to adaptively distinguish real and fake faces.
arXiv Detail & Related papers (2023-12-07T07:19:45Z)
- Robust Identity Perceptual Watermark Against Deepfake Face Swapping [8.276177968730549]
With the rapid development of deep generative models, deepfake face swapping has caused critical privacy issues.
We propose the first robust identity perceptual watermarking framework that concurrently performs detection and source tracing against Deepfake face swapping.
arXiv Detail & Related papers (2023-11-02T16:04:32Z)
- Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies generate vivid faces, which have raised public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z)
- Information-containing Adversarial Perturbation for Combating Facial Manipulation Systems [19.259372985094235]
Malicious applications of deep learning systems pose a serious threat to individuals' privacy and reputation.
We propose a novel two-tier protection method named Information-containing Adversarial Perturbation (IAP).
We use an encoder to map a facial image and its identity message to a cross-model adversarial example which can disrupt multiple facial manipulation systems.
arXiv Detail & Related papers (2023-03-21T06:48:14Z)
- Detect and Locate: A Face Anti-Manipulation Approach with Semantic and Noise-level Supervision [67.73180660609844]
We propose a conceptually simple but effective method to efficiently detect forged faces in an image.
The proposed scheme relies on a segmentation map that delivers meaningful high-level semantic information clues about the image.
The proposed model achieves state-of-the-art detection accuracy and remarkable localization performance.
arXiv Detail & Related papers (2021-07-13T02:59:31Z)
- Robust Face-Swap Detection Based on 3D Facial Shape Information [59.32489266682952]
Face-swap images and videos are increasingly used by malicious attackers to discredit key figures.
Previous detection techniques based on pixel-level artifacts tend to focus on unclear patterns while ignoring available semantic clues.
We propose a biometric information based method to fully exploit the appearance and shape feature for face-swap detection of key figures.
arXiv Detail & Related papers (2021-04-28T09:35:48Z)
- DeepBlur: A Simple and Effective Method for Natural Image Obfuscation [4.80165284612342]
We present DeepBlur, a simple yet effective method for image obfuscation by blurring in the latent space of an unconditionally pre-trained generative model.
We compare it with existing methods by efficiency and image quality, and evaluate against both state-of-the-art deep learning models and industrial products.
arXiv Detail & Related papers (2021-03-31T19:31:26Z)
- Perception Matters: Exploring Imperceptible and Transferable Anti-forensics for GAN-generated Fake Face Imagery Detection [28.620523463372177]
Generative adversarial networks (GANs) can generate photo-realistic fake facial images that are perceptually indistinguishable from real face photos.
Here we explore more imperceptible and transferable anti-forensics for fake face imagery detection based on adversarial attacks.
We propose a novel adversarial attack method, better suited to image anti-forensics, in the transformed color domain by considering visual perception.
arXiv Detail & Related papers (2020-10-29T18:54:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.