PROVES: Establishing Image Provenance using Semantic Signatures
- URL: http://arxiv.org/abs/2110.11411v1
- Date: Thu, 21 Oct 2021 18:30:09 GMT
- Title: PROVES: Establishing Image Provenance using Semantic Signatures
- Authors: Mingyang Xie, Manav Kulshrestha, Shaojie Wang, Jinghan Yang, Ayan
Chakrabarti, Ning Zhang, and Yevgeniy Vorobeychik
- Abstract summary: We propose a novel architecture for preserving the provenance of semantic information in images.
We apply this architecture to verifying two types of semantic information: individual identities (faces) and whether the photo was taken indoors or outdoors.
- Score: 36.35727952091869
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modern AI tools, such as generative adversarial networks, have transformed
our ability to create and modify visual data with photorealistic results.
However, one of the deleterious side-effects of these advances is the emergence
of nefarious uses in manipulating information in visual data, such as through
the use of deep fakes. We propose a novel architecture for preserving the
provenance of semantic information in images to make them less susceptible to
deep fake attacks. Our architecture includes semantic signing and verification
steps. We apply this architecture to verifying two types of semantic
information: individual identities (faces) and whether the photo was taken
indoors or outdoors. Verification accounts for a collection of common image
transformations, such as translation, scaling, cropping, and small rotations,
and rejects adversarial transformations, such as adversarially perturbed or, in
the case of face verification, swapped faces. Experiments demonstrate that in
the case of provenance of faces in an image, our approach is robust to
black-box adversarial transformations (which are rejected) as well as benign
transformations (which are accepted), with few false negatives and false
positives. Background verification, on the other hand, is susceptible to
black-box adversarial examples, but becomes significantly more robust after
adversarial training.
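The signing-and-verification pipeline described above could be sketched roughly as follows. This is a minimal illustration, not the paper's actual construction: the embedding extractor is assumed to exist upstream, the HMAC-SHA256 signature and cosine-similarity threshold are stand-ins for whatever signing scheme and transformation-tolerant comparison PROVES actually uses.

```python
import hmac
import hashlib
import json
import math

# Held by the trusted signing service (illustrative assumption).
SECRET_KEY = b"signer-secret"

def sign_semantics(embedding):
    """Sign a semantic embedding (e.g., a face descriptor) at capture time."""
    payload = json.dumps(embedding).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"embedding": embedding, "tag": tag}

def verify_semantics(candidate_embedding, signature, threshold=0.9):
    """Verify provenance of a candidate image's re-extracted embedding.

    Step 1: check the stored embedding was not tampered with (signature).
    Step 2: compare it to the candidate embedding by cosine similarity,
    so benign transformations (small rotations, crops) that perturb the
    embedding slightly still pass, while semantic edits are rejected.
    """
    payload = json.dumps(signature["embedding"]).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature["tag"]):
        return False  # forged signature or tampered reference embedding
    ref = signature["embedding"]
    dot = sum(a * b for a, b in zip(ref, candidate_embedding))
    norm = (math.sqrt(sum(a * a for a in ref))
            * math.sqrt(sum(b * b for b in candidate_embedding)))
    return dot / norm >= threshold
```

The threshold trades off false negatives (benign transformations rejected) against false positives (adversarial edits accepted), which is the tension the paper's experiments measure.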
Related papers
- ID-Guard: A Universal Framework for Combating Facial Manipulation via Breaking Identification [60.73617868629575]
The misuse of deep learning-based facial manipulation poses a potential threat to civil rights.
To prevent this fraud at its source, proactive defense technologies have been proposed to disrupt the manipulation process.
We propose a novel universal framework for combating facial manipulation, called ID-Guard.
arXiv Detail & Related papers (2024-09-20T09:30:08Z)
- Semantic Contextualization of Face Forgery: A New Definition, Dataset, and Detection Method [77.65459419417533]
We put face forgery in a semantic context and define that computational methods that alter semantic face attributes are sources of face forgery.
We construct a large face forgery image dataset, where each image is associated with a set of labels organized in a hierarchical graph.
We propose a semantics-oriented face forgery detection method that captures label relations and prioritizes the primary task.
arXiv Detail & Related papers (2024-05-14T10:24:19Z)
- Hierarchical Generative Network for Face Morphing Attacks [7.34597796509503]
Face morphing attacks circumvent face recognition systems (FRSs) by creating a morphed image that contains multiple identities.
We propose a novel morphing attack method to improve the quality of morphed images and better preserve the contributing identities.
arXiv Detail & Related papers (2024-03-17T06:09:27Z)
- Building an Invisible Shield for Your Portrait against Deepfakes [34.65356811439098]
We propose a novel framework - Integrity Encryptor, aiming to protect portraits in a proactive strategy.
Our methodology involves covertly encoding messages that are closely associated with key facial attributes into authentic images.
The modified facial attributes serve as a means of detecting manipulated images through a comparison of the decoded messages.
arXiv Detail & Related papers (2023-05-22T10:01:28Z)
- Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method builds a substitute model for face reconstruction and then transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
arXiv Detail & Related papers (2022-04-26T14:36:06Z)
- Learning Disentangled Representation for One-shot Progressive Face Swapping [65.98684203654908]
We present a simple yet efficient method named FaceSwapper for one-shot face swapping based on generative adversarial networks.
Our method consists of a disentangled representation module and a semantic-guided fusion module.
Our results show that our method achieves state-of-the-art results on benchmarks with fewer training samples.
arXiv Detail & Related papers (2022-03-24T11:19:04Z)
- Adversarial Defense by Latent Style Transformations [20.78877614953599]
We investigate an attack-agnostic defense against adversarial attacks on high-resolution images by detecting suspicious inputs.
The intuition behind our approach is that the essential characteristics of a normal image are generally invariant under non-essential style transformations.
arXiv Detail & Related papers (2020-06-17T07:56:36Z)
- Learning Transformation-Aware Embeddings for Image Forensics [15.484408315588569]
Image Provenance Analysis aims at discovering relationships among different manipulated image versions that share content.
One of the main sub-problems for provenance analysis that has not yet been addressed directly is the edit ordering of images that share full content or are near-duplicates.
This paper introduces a novel deep learning-based approach to provide a plausible ordering to images that have been generated from a single image through transformations.
arXiv Detail & Related papers (2020-01-13T22:01:24Z)
- Fine-grained Image-to-Image Transformation towards Visual Recognition [102.51124181873101]
We aim at transforming an image with a fine-grained category to synthesize new images that preserve the identity of the input image.
We adopt a model based on generative adversarial networks to disentangle the identity related and unrelated factors of an image.
Experiments on the CompCars and Multi-PIE datasets demonstrate that our model preserves the identity of the generated images much better than the state-of-the-art image-to-image transformation models.
arXiv Detail & Related papers (2020-01-12T05:26:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.