BLADERUNNER: Rapid Countermeasure for Synthetic (AI-Generated) StyleGAN
Faces
- URL: http://arxiv.org/abs/2210.06587v2
- Date: Fri, 14 Oct 2022 01:06:20 GMT
- Title: BLADERUNNER: Rapid Countermeasure for Synthetic (AI-Generated) StyleGAN
Faces
- Authors: Adam Dorian Wong
- Abstract summary: StyleGAN is the open-sourced implementation made by NVIDIA.
This report surveys the relevance of AI/ML with respect to Cyber & Information Operations.
Project Blade Runner encompasses two scripts necessary to counter StyleGAN images.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: StyleGAN is the open-sourced TensorFlow implementation made by NVIDIA. It has
revolutionized high-quality facial image generation. However, this
democratization of Artificial Intelligence / Machine Learning (AI/ML)
algorithms has enabled hostile threat actors to establish cyber personas or
sock-puppet accounts on social media platforms, fronting them with
ultra-realistic synthetic faces. This report surveys the relevance of AI/ML with respect to Cyber &
Information Operations. The proliferation of AI/ML algorithms has led to a rise
in DeepFakes and inauthentic social media accounts. Threats are analyzed within
the Strategic and Operational Environments. Existing methods of identifying
synthetic faces exist, but they rely on human beings to visually scrutinize
each photo for inconsistencies. However, through use of the DLIB 68-landmark
pre-trained file, it is possible to analyze and detect synthetic faces by
exploiting repetitive behaviors in StyleGAN images. Project Blade Runner
encompasses two scripts necessary to counter StyleGAN images. Through
PapersPlease acting as the analyzer, it is possible to derive
indicators-of-attack (IOA) from scraped image samples. These IOAs can be fed
back into Among_Us acting as the detector to identify synthetic faces from live
operational samples. The open-source copy of Blade Runner is a redacted
version that may lack some unit tests and functionality, but it is far
leaner, better optimized, and serves as a proof-of-concept for the
information security community. The desired end-state is to incrementally
add automation until it is on par with its closed-source predecessor.
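The abstract does not spell out the mechanics of the landmark check, but the idea is straightforward to sketch. The following is a minimal Python illustration, not the actual Blade Runner code: it uses dlib's real 68-landmark predictor (the pre-trained shape_predictor_68_face_landmarks.dat file), while the function names, directory layout, and pixel tolerance are assumptions invented for the example.

```python
# Minimal sketch of the landmark-based approach described above -- an
# illustration, not the actual Blade Runner code. Assumes dlib and its
# pre-trained shape_predictor_68_face_landmarks.dat file are available
# locally; function names, paths, and the pixel tolerance are invented.
import glob

import dlib
import numpy as np

DETECTOR = dlib.get_frontal_face_detector()
PREDICTOR = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
EYE_POINTS = range(36, 48)  # the twelve eye landmarks in the 68-point scheme


def eye_landmarks(path):
    """Return the eye landmarks of the first detected face as a (12, 2) array."""
    img = dlib.load_rgb_image(path)
    faces = DETECTOR(img, 1)
    if not faces:
        return None
    shape = PREDICTOR(img, faces[0])
    return np.array([(shape.part(i).x, shape.part(i).y) for i in EYE_POINTS], float)


def derive_ioa(sample_dir):
    """Analyzer step (PapersPlease-like): average eye positions across
    scraped StyleGAN samples to form an indicator-of-attack (IOA)."""
    coords = [lm for p in sorted(glob.glob(f"{sample_dir}/*.jpg"))
              if (lm := eye_landmarks(p)) is not None]
    return np.mean(coords, axis=0)


def looks_synthetic(path, ioa, tolerance_px=3.0):
    """Detector step (Among_Us-like): flag images whose eye landmarks sit
    suspiciously close to the IOA derived from known-synthetic samples."""
    lm = eye_landmarks(path)
    if lm is None:
        return False
    return float(np.mean(np.linalg.norm(lm - ioa, axis=1))) < tolerance_px
```

The check exploits the "repetitive behavior" the abstract mentions: StyleGAN's training data is face-aligned, so generated faces place the eyes at near-fixed pixel coordinates, whereas genuine photographs scatter widely. The 3-pixel tolerance is a guess that would need tuning against real operational samples.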
Related papers
- Zero-Shot Detection of AI-Generated Images (arXiv, 2024-09-24)
We propose a zero-shot entropy-based detector (ZED) to detect AI-generated images.
Inspired by recent works on machine-generated text detection, our idea is to measure how surprising the image under analysis is compared to a model of real images.
ZED achieves an average improvement of more than 3% over the SoTA in terms of accuracy.
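As a rough intuition for the surprisal idea, the toy stand-in below scores an image by how many bits a generic lossless compressor needs per pixel; ZED itself uses a learned lossless coder trained on real images, which this sketch does not replicate, and baseline_bpp is an arbitrary placeholder rather than a value from the paper.

```python
# Toy stand-in for the surprisal idea, NOT the ZED model: score an image by
# the bits a generic compressor needs per pixel, then compare against a
# calibrated baseline. baseline_bpp is an arbitrary placeholder.
import zlib

import numpy as np
from PIL import Image


def bits_per_pixel(path):
    img = np.asarray(Image.open(path).convert("RGB"))
    compressed = zlib.compress(img.tobytes(), 9)
    return 8 * len(compressed) / (img.shape[0] * img.shape[1])


def surprisal_gap(path, baseline_bpp=12.0):
    # Large gaps in either direction mark the image as "surprising"
    # relative to the baseline established for real photographs.
    return abs(bits_per_pixel(path) - baseline_bpp)
```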
- Imperceptible Face Forgery Attack via Adversarial Semantic Mask (arXiv, 2024-06-16)
We propose an Adversarial Semantic Mask Attack framework (ASMA) which can generate adversarial examples with good transferability and invisibility.
Specifically, we propose a novel adversarial semantic mask generative model, which can constrain generated perturbations in local semantic regions for good stealthiness.
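The masking constraint itself is simple to illustrate. The sketch below shows only that step, assuming a binary mask from some face parser; it stands in for neither ASMA's generative model nor its attack objective.

```python
# Sketch of the semantic-mask constraint only -- not ASMA's generative model
# or attack objective. Assumes a binary mask (e.g., from a face parser)
# selecting the semantic regions where perturbation is allowed.
import numpy as np


def apply_masked_perturbation(image, perturbation, mask, epsilon=8 / 255):
    """image, perturbation: floats in [0, 1]; mask: {0, 1} array, same shape."""
    delta = np.clip(perturbation, -epsilon, epsilon) * mask
    return np.clip(image + delta, 0.0, 1.0)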
- Poisoned Forgery Face: Towards Backdoor Attacks on Face Forgery Detection (arXiv, 2024-02-18)
This paper introduces a novel and previously unrecognized threat in face forgery detection scenarios caused by backdoor attacks.
By embedding backdoors into models, attackers can deceive detectors into producing erroneous predictions for forged faces.
We propose the Poisoned Forgery Face framework, which enables clean-label backdoor attacks on face forgery detectors.
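A clean-label poisoning step is easy to outline. The blend-based trigger below illustrates the threat model, not the paper's actual trigger design; rate, alpha, and the function name are invented for the example.

```python
# Illustration of a clean-label poisoning step, not the paper's trigger
# design: blend a low-amplitude trigger into a random fraction of training
# images while leaving their labels untouched. rate and alpha are invented.
import numpy as np


def poison_clean_label(images, trigger, rate=0.1, alpha=0.05, seed=0):
    """images: (N, H, W, C) floats in [0, 1]; trigger: (H, W, C) pattern."""
    rng = np.random.default_rng(seed)
    poisoned = images.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    poisoned[idx] = np.clip((1 - alpha) * poisoned[idx] + alpha * trigger, 0, 1)
    return poisoned, idx
```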
- BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Models (arXiv, 2023-07-31)
The rise in popularity of text-to-image generative artificial intelligence has attracted widespread public interest.
We demonstrate that this technology can be attacked to generate content that subtly manipulates its users.
We propose a Backdoor Attack on text-to-image Generative Models (BAGM), the first attack to target three popular text-to-image generative models across three stages of the generative process.
- Evading Forensic Classifiers with Attribute-Conditioned Adversarial Faces (arXiv, 2023-06-22)
We show that it is possible to successfully generate adversarial fake faces with a specified set of attributes.
We propose a framework to search for adversarial latent codes within the feature space of StyleGAN.
We also propose a meta-learning based optimization strategy to achieve transferable performance on unknown target models.
- Open-Eye: An Open Platform to Study Human Performance on Identifying AI-Synthesized Faces (arXiv, 2022-05-13)
We develop an online platform called Open-eye to study human performance in detecting AI-synthesized faces.
This paper describes the design and workflow of Open-eye.
- TAGPerson: A Target-Aware Generation Pipeline for Person Re-identification (arXiv, 2021-12-28)
We propose a novel Target-Aware Generation pipeline to produce synthetic person images, called TAGPerson.
Specifically, it involves a parameterized rendering method, where the parameters are controllable and can be adjusted according to target scenes.
In our experiments, our target-aware synthetic images can achieve a much higher performance than the generalized synthetic images on MSMT17, i.e. 47.5% vs. 40.9% for rank-1 accuracy.
- Synthetic Periocular Iris PAI from a Small Set of Near-Infrared-Images (arXiv, 2021-07-26)
This paper proposes a novel synthetically created PAI (SPI-PAI) using four state-of-the-art GAN algorithms.
The best PAD algorithm reported by the LivDet-2020 competition was tested using the synthetic PAI.
Results demonstrated the feasibility of synthetic images fooling presentation attack detection algorithms.
- OGAN: Disrupting Deepfakes with an Adversarial Attack that Survives Training (arXiv, 2020-06-17)
We introduce a class of adversarial attacks that can disrupt face-swapping autoencoders.
We propose the Oscillating GAN (OGAN) attack, a novel attack optimized to be training-resistant.
These results demonstrate the existence of training-resistant adversarial attacks, potentially applicable to a wide range of domains.
- Defending against GAN-based Deepfake Attacks via Transformation-aware Adversarial Faces (arXiv, 2020-06-12)
Deepfake represents a category of face-swapping attacks that leverage machine learning models.
We propose to use novel transformation-aware adversarially perturbed faces as a defense against Deepfake attacks.
We also propose to use an ensemble-based approach to enhance the defense robustness against GAN-based Deepfake variants.
- Noise Modeling, Synthesis and Classification for Generic Object Anti-Spoofing (arXiv, 2020-03-29)
We tackle the problem of Generic Object Anti-Spoofing (GOAS) for the first time.
One significant cue to detect these attacks is the noise patterns introduced by the capture sensors and spoof mediums.
We propose a GAN-based architecture to synthesize and identify the noise patterns from seen and unseen medium/sensor combinations.
This list is automatically generated from the titles and abstracts of the papers on this site.