Noise Modeling, Synthesis and Classification for Generic Object Anti-Spoofing
- URL: http://arxiv.org/abs/2003.13043v2
- Date: Tue, 31 Mar 2020 16:15:59 GMT
- Title: Noise Modeling, Synthesis and Classification for Generic Object Anti-Spoofing
- Authors: Joel Stehouwer, Amin Jourabloo, Yaojie Liu, Xiaoming Liu
- Abstract summary: We tackle the problem of Generic Object Anti-Spoofing (GOAS) for the first time.
One significant cue to detect these attacks is the noise patterns introduced by the capture sensors and spoof mediums.
We propose a GAN-based architecture to synthesize and identify the noise patterns from seen and unseen medium/sensor combinations.
- Score: 26.530310468430038
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Using printed photographs and replaying videos of biometric modalities, such
as iris, fingerprint, and face, are common attacks to fool recognition
systems into granting access as the genuine user. With the growth of online
person-to-person shopping (e.g., eBay and Craigslist), such attacks also
threaten these services, where the listing photo might be captured not
from the real item but from paper or a digital screen. Thus, the study of
anti-spoofing should be extended from modality-specific solutions to
generic-object-based ones. In this work, we define and tackle the problem of
Generic Object Anti-Spoofing (GOAS) for the first time. One significant cue to
detect these attacks is the noise patterns introduced by the capture sensors
and spoof mediums. Different sensor/medium combinations can result in diverse
noise patterns. We propose a GAN-based architecture to synthesize and identify
the noise patterns from seen and unseen medium/sensor combinations. We show
that the synthesis and identification procedures are mutually beneficial. We
further demonstrate that the learned GOAS models can directly contribute to
modality-specific anti-spoofing without domain transfer. The code and GOSet
dataset are available at cvlab.cse.msu.edu/project-goas.html.
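As a concrete illustration of the synthesize-and-identify idea, the minimal PyTorch sketch below pairs a conditional generator that produces a spoof-noise residual with a classifier that names the medium/sensor combination. Network sizes, the embedding-based conditioning, and all hyperparameters are illustrative assumptions, not the authors' released architecture.

```python
import torch
import torch.nn as nn

class NoiseGenerator(nn.Module):
    """Synthesizes an additive spoof-noise residual for a live image,
    conditioned on a (medium, sensor) combination (illustrative design)."""
    def __init__(self, num_combos, embed_dim=16):
        super().__init__()
        self.embed = nn.Embedding(num_combos, embed_dim)
        self.net = nn.Sequential(
            nn.Conv2d(3 + embed_dim, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh())   # bounded residual

    def forward(self, live_img, combo_id):
        b, _, h, w = live_img.shape
        cond = self.embed(combo_id)[:, :, None, None].expand(b, -1, h, w)
        noise = self.net(torch.cat([live_img, cond], dim=1))
        return live_img + noise, noise   # synthesized spoof + its noise pattern

class NoiseClassifier(nn.Module):
    """Identifies which medium/sensor combination (or 'live') produced an
    image -- the identification half of the framework."""
    def __init__(self, num_combos):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(64, num_combos + 1)   # extra class for "live"

    def forward(self, img):
        return self.head(self.features(img))

# Toy round trip: synthesize a spoof from a live image, then classify it.
G, C = NoiseGenerator(num_combos=12), NoiseClassifier(num_combos=12)
live = torch.rand(2, 3, 64, 64)
spoof, noise = G(live, torch.tensor([3, 7]))
logits = C(spoof)   # joint training would make G and C mutually beneficial
```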
Related papers
- Imperceptible Face Forgery Attack via Adversarial Semantic Mask [59.23247545399068]
We propose an Adversarial Semantic Mask Attack framework (ASMA) which can generate adversarial examples with good transferability and invisibility.
Specifically, we propose a novel adversarial semantic mask generative model, which constrains the generated perturbations to local semantic regions for good stealthiness.
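The masking constraint itself is simple to express. A minimal sketch, assuming an L-infinity budget and a precomputed semantic mask (both hypothetical; ASMA's actual mask generator is learned):

```python
import torch

def apply_semantic_mask(perturbation, semantic_mask, eps=8 / 255):
    """Confine an adversarial perturbation to semantic regions.
    perturbation:  (B, 3, H, W) unconstrained perturbation
    semantic_mask: (B, 1, H, W) in [0, 1], 1 = attackable semantic region
    eps: assumed L-infinity budget
    """
    return perturbation.clamp(-eps, eps) * semantic_mask

# Toy usage: the perturbation is zeroed outside the masked region.
delta = torch.randn(1, 3, 32, 32)
mask = torch.zeros(1, 1, 32, 32)
mask[..., 8:16, 8:16] = 1.0       # pretend this marks, e.g., the eye region
adv_delta = apply_semantic_mask(delta, mask)
```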
arXiv Detail & Related papers (2024-06-16T10:38:11Z)
- Evading Forensic Classifiers with Attribute-Conditioned Adversarial Faces [6.105361899083232]
We show that it is possible to successfully generate adversarial fake faces with a specified set of attributes.
We propose a framework to search for adversarial latent codes within the feature space of StyleGAN.
We also propose a meta-learning based optimization strategy to achieve transferable performance on unknown target models.
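A rough sketch of the latent-search step, with the StyleGAN generator and forensic classifier treated as black-box callables; the proximity penalty and step count are invented placeholders, and the paper's meta-learning strategy is omitted:

```python
import torch

def adversarial_latent_search(generator, forensic_clf, z_init, steps=100, lr=0.01):
    """Optimize a latent code so the generated face is scored 'real' by a
    forensic classifier while staying near the starting code (illustrative)."""
    z = z_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        img = generator(z)                        # e.g. a StyleGAN synthesis pass
        loss = -forensic_clf(img).mean()          # higher logit = "real" (assumed)
        loss = loss + 1e-3 * (z - z_init).pow(2).sum()  # proximity penalty (assumed)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach()
```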
arXiv Detail & Related papers (2023-06-22T17:59:55Z)
- An Open Patch Generator based Fingerprint Presentation Attack Detection using Generative Adversarial Network [3.5558308387389626]
A Presentation Attack (PA), or spoofing, threatens Automatic Fingerprint Recognition Systems (AFRS) by presenting a spoof of a genuine fingerprint to the sensor.
This paper proposes a CNN based technique that uses a Generative Adversarial Network (GAN) to augment the dataset with spoof samples generated from the proposed Open Patch Generator (OPG)
An overall accuracy of 96.20%, 94.97%, and 92.90% has been achieved on the LivDet 2015, 2017, and 2019 databases, respectively under the LivDet protocol scenarios.
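Schematically, the augmentation amounts to concatenating GAN-generated spoof patches onto the real training set; the sketch below uses a placeholder generator standing in for the Open Patch Generator, with an assumed latent-to-patch interface:

```python
import torch
from torch.utils.data import ConcatDataset, TensorDataset

def augment_with_gan_spoofs(real_dataset, opg_generator, n_fake, z_dim=128):
    """Extend a presentation-attack-detection training set with spoof patches
    sampled from a GAN generator (latent-to-patch interface is assumed)."""
    with torch.no_grad():
        z = torch.randn(n_fake, z_dim)
        fake_patches = opg_generator(z)                  # (n_fake, C, H, W)
    fake_labels = torch.ones(n_fake, dtype=torch.long)   # label 1 = spoof
    fake_ds = TensorDataset(fake_patches, fake_labels)
    return ConcatDataset([real_dataset, fake_ds])        # feed this to the CNN
```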
arXiv Detail & Related papers (2023-06-06T10:52:06Z)
- Tree-Ring Watermarks: Fingerprints for Diffusion Images that are Invisible and Robust [55.91987293510401]
Watermarking the outputs of generative models is a crucial technique for tracing copyright and preventing potential harm from AI-generated content.
We introduce a novel technique called Tree-Ring Watermarking that robustly fingerprints diffusion model outputs.
Our watermark is semantically hidden in the image space and is far more robust than watermarking alternatives that are currently deployed.
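The central trick is hiding a ring-shaped key in the Fourier spectrum of the initial noise that seeds the diffusion sampler. A NumPy sketch of embedding and checking the key (radii and key value are illustrative; the paper additionally recovers the initial noise from a generated image via DDIM inversion before checking):

```python
import numpy as np

def embed_tree_ring(noise, key_value=50.0, radii=(4, 8, 12)):
    """Write concentric rings into the Fourier spectrum of the Gaussian
    noise that seeds the diffusion sampler (illustrative key/radii)."""
    h, w = noise.shape
    spec = np.fft.fftshift(np.fft.fft2(noise))
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h // 2, xx - w // 2)
    for radius in radii:
        spec[np.abs(r - radius) < 0.5] = key_value   # ring-shaped key
    return np.real(np.fft.ifft2(np.fft.ifftshift(spec)))

def detect_tree_ring(noise, key_value=50.0, radius=4, tol=1e-3):
    """Check whether recovered initial noise carries the ring key."""
    h, w = noise.shape
    spec = np.fft.fftshift(np.fft.fft2(noise))
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h // 2, xx - w // 2)
    ring = np.abs(r - radius) < 0.5
    return np.abs(spec[ring] - key_value).mean() < tol

seed_noise = np.random.randn(64, 64)
print(detect_tree_ring(embed_tree_ring(seed_noise)))  # True: key survives
```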
arXiv Detail & Related papers (2023-05-31T17:00:31Z)
- Securing Deep Generative Models with Universal Adversarial Signature [69.51685424016055]
Deep generative models pose threats to society due to their potential misuse.
In this paper, we propose to inject a universal adversarial signature into an arbitrary pre-trained generative model.
The proposed method is validated on the FFHQ and ImageNet datasets with various state-of-the-art generative models.
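A heavily simplified sketch of one fine-tuning step: the modified generator should stay close to a frozen copy of itself while a paired detector learns to flag its outputs as signed. The loss form and weighting here are assumptions, not the paper's objective:

```python
import torch
import torch.nn.functional as F

def signature_finetune_step(generator, frozen_copy, detector, z, optimizer, lam=0.1):
    """One step of signature injection: keep outputs close to the original
    (frozen) generator while a paired detector flags them as 'signed'."""
    img = generator(z)
    with torch.no_grad():
        ref = frozen_copy(z)                      # the unmodified pre-trained model
    fidelity = F.mse_loss(img, ref)               # preserve generation quality
    sig_logit = detector(img)                     # detector reads the signature
    sign = F.binary_cross_entropy_with_logits(
        sig_logit, torch.ones_like(sig_logit))    # target: "signed"
    loss = fidelity + lam * sign
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```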
arXiv Detail & Related papers (2023-05-25T17:59:01Z)
- Detection of Adversarial Physical Attacks in Time-Series Image Data [12.923271427789267]
We propose VisionGuard* (VG*), which couples VisionGuard (VG) with majority-vote methods to detect adversarial physical attacks in time-series image data.
This is motivated by autonomous systems applications where images are collected over time using onboard sensors for decision-making purposes.
We have evaluated VG* on videos of both clean and physically attacked traffic signs generated by a state-of-the-art robust physical attack.
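The temporal aggregation is straightforward; the substantive component, VisionGuard's per-frame detector, is treated as a black box in this minimal sketch:

```python
from collections import Counter
from typing import Callable, Iterable

def majority_vote_detect(frames: Iterable, is_attacked: Callable[[object], bool]) -> bool:
    """Flag a frame sequence as attacked when most frames are flagged.
    is_attacked: a single-image detector (e.g. VisionGuard) returning a bool."""
    votes = Counter(bool(is_attacked(f)) for f in frames)
    return votes[True] > votes[False]

# Toy usage with a stub detector that flags "bright" frames.
frames = [0.9, 0.8, 0.2, 0.95]                          # stand-ins for images
print(majority_vote_detect(frames, lambda f: f > 0.5))  # True (3 of 4 flagged)
```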
arXiv Detail & Related papers (2023-04-27T02:08:13Z)
- CamDiff: Camouflage Image Augmentation via Diffusion Model [83.35960536063857]
CamDiff is a novel approach that leverages a latent diffusion model to synthesize salient objects in camouflaged scenes.
Our approach enables flexible editing and efficient large-scale dataset generation at a low cost.
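Conceptually this resembles off-the-shelf latent-diffusion inpainting: prompt a salient object into a masked region of a camouflaged scene. A sketch using Hugging Face diffusers with a stand-in checkpoint and hypothetical file paths (not CamDiff's released pipeline):

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

# Stand-in checkpoint; CamDiff's own weights and pipeline may differ.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
).to("cuda")

scene = Image.open("camouflaged_scene.png").convert("RGB")  # hypothetical path
mask = Image.open("paste_region.png").convert("L")          # white = synthesize here

result = pipe(prompt="a photo of a rabbit",  # the salient object to insert
              image=scene, mask_image=mask).images[0]
result.save("augmented_scene.png")
```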
arXiv Detail & Related papers (2023-04-11T19:37:47Z)
- Black-Box Attack against GAN-Generated Image Detector with Contrastive Perturbation [0.4297070083645049]
We propose a new black-box attack method against GAN-generated image detectors.
A novel contrastive learning strategy is adopted to train the encoder-decoder-based anti-forensic model.
The proposed attack effectively reduces the accuracy of three state-of-the-art detectors on six popular GANs.
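A simplified guess at the training signal: an InfoNCE-style loss that pulls perturbed GAN images toward real images in a feature space while pushing them away from unperturbed GAN images. The feature extractor, loss form, and temperature are assumptions:

```python
import torch
import torch.nn.functional as F

def contrastive_antiforensic_loss(feat_fn, perturbed, real_imgs, gan_imgs, tau=0.1):
    """InfoNCE-style loss: move perturbed GAN images toward real images in
    feature space and away from unperturbed GAN images (assumed form)."""
    f_adv = F.normalize(feat_fn(perturbed), dim=1)   # anchors   (B, D)
    f_real = F.normalize(feat_fn(real_imgs), dim=1)  # positives (B, D)
    f_gan = F.normalize(feat_fn(gan_imgs), dim=1)    # negatives (B, D)
    pos = torch.exp((f_adv * f_real).sum(dim=1) / tau)   # paired similarity
    neg = torch.exp(f_adv @ f_gan.t() / tau).sum(dim=1)  # all negatives
    return -torch.log(pos / (pos + neg)).mean()
```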
arXiv Detail & Related papers (2022-11-07T12:56:14Z)
- Dual Spoof Disentanglement Generation for Face Anti-spoofing with Depth Uncertainty Learning [54.15303628138665]
Face anti-spoofing (FAS) plays a vital role in preventing face recognition systems from presentation attacks.
Existing face anti-spoofing datasets lack diversity due to insufficient identities and limited variation.
We propose the Dual Spoof Disentanglement Generation framework to tackle this challenge by "anti-spoofing via generation".
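The depth-uncertainty component can be sketched as a heteroscedastic Gaussian negative log-likelihood over per-pixel face depth, a standard formulation assumed here to mirror the paper's idea:

```python
import torch

def depth_uncertainty_loss(pred_mu, pred_logvar, depth_gt):
    """Heteroscedastic Gaussian NLL over per-pixel face depth: pixels with
    high predicted variance are down-weighted, capturing depth uncertainty."""
    inv_var = torch.exp(-pred_logvar)
    return (inv_var * (pred_mu - depth_gt) ** 2 + pred_logvar).mean()
```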
arXiv Detail & Related papers (2021-12-01T15:36:59Z)
- Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative adversarial attacks can get rid of this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make widely-used models collapse, but also achieve good visual quality.
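A schematic of the discriminator-free design: a shared encoder feeds two heads, one producing a bounded perturbation and one a saliency map that gates where the perturbation lands. Layer sizes and the gating rule are illustrative:

```python
import torch
import torch.nn as nn

class SSAESketch(nn.Module):
    """Schematic saliency-based auto-encoder: a shared encoder feeds a
    perturbation head and a saliency head; no discriminator is used."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.perturb_head = nn.Conv2d(32, 3, 3, padding=1)
        self.saliency_head = nn.Sequential(
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, x, eps=8 / 255):
        h = self.encoder(x)
        delta = torch.tanh(self.perturb_head(h)) * eps   # bounded perturbation
        saliency = self.saliency_head(h)                 # where to attack
        adv = (x + saliency * delta).clamp(0, 1)         # gate by saliency
        return adv, saliency

adv, sal = SSAESketch()(torch.rand(1, 3, 64, 64))   # toy forward pass
```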
arXiv Detail & Related papers (2021-07-20T01:55:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.