Fingerprint Membership and Identity Inference Against Generative Adversarial Networks
- URL: http://arxiv.org/abs/2406.15253v1
- Date: Fri, 21 Jun 2024 15:43:47 GMT
- Title: Fingerprint Membership and Identity Inference Against Generative Adversarial Networks
- Authors: Saverio Cavasin, Daniele Mari, Simone Milani, Mauro Conti
- Abstract summary: We design and test an identity inference attack on fingerprint datasets created by means of a generative adversarial network.
Experimental results show that the proposed attack is effective under different configurations and easily extendable to other biometric measurements.
- Score: 19.292976022250684
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative models are gaining significant attention as potential catalysts for a new industrial revolution. Because automated sample generation can help address the privacy and data scarcity issues that typically affect learned biometric models, such technologies have become widespread in this field. In this paper, we assess the vulnerabilities of generative machine learning models with respect to identity protection by designing and testing an identity inference attack on fingerprint datasets created with a generative adversarial network. Experimental results show that the proposed attack is effective under different configurations and easily extendable to other biometric measurements.
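The abstract does not spell out the attack pipeline; purely as a hedged illustration of what an identity inference test against a GAN-generated dataset can look like, the sketch below embeds the target identity's fingerprints and the synthetic fingerprints with some fixed feature extractor and flags the identity as a likely training member when a generated sample lies unusually close to it. The cosine metric, the embedding step, and the threshold value are assumptions made for the sketch, not the authors' configuration.

```python
import numpy as np

def identity_inference_score(target_emb: np.ndarray, generated_emb: np.ndarray) -> float:
    """Cosine distance between the target identity and its closest generated sample.

    target_emb:    (n_target, d) embeddings of the claimed identity's fingerprints
    generated_emb: (n_generated, d) embeddings of the GAN-generated fingerprints
    """
    # L2-normalise so the dot product equals cosine similarity.
    t = target_emb / np.linalg.norm(target_emb, axis=1, keepdims=True)
    g = generated_emb / np.linalg.norm(generated_emb, axis=1, keepdims=True)
    similarity = t @ g.T                  # pairwise cosine similarities
    return float(1.0 - similarity.max())  # distance to the closest generated sample

def identity_was_in_training_set(target_emb, generated_emb, threshold=0.35) -> bool:
    # The threshold is a placeholder; it would be calibrated on identities
    # known to be inside and outside the generator's training data.
    return identity_inference_score(target_emb, generated_emb) < threshold
```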
Related papers
- Enhancing Network Intrusion Detection Performance using Generative Adversarial Networks [0.25163931116642785]
We propose a novel approach for enhancing the performance of an NIDS through the integration of Generative Adversarial Networks (GANs).
GANs generate synthetic network traffic data that closely mimics real-world network behavior.
Our findings show that the integration of GANs into NIDS can lead to enhancements in intrusion detection performance for attacks with limited training data.
arXiv Detail & Related papers (2024-04-11T04:01:15Z) - Deepfake Sentry: Harnessing Ensemble Intelligence for Resilient Detection and Generalisation [0.8796261172196743]
We propose a proactive and sustainable deepfake training augmentation solution.
We employ a pool of autoencoders that mimic the effect of the artefacts introduced by the deepfake generator models.
Experiments reveal that our proposed ensemble autoencoder-based data augmentation learning approach offers improvements in terms of generalisation.
arXiv Detail & Related papers (2024-03-29T19:09:08Z) - DiffFinger: Advancing Synthetic Fingerprint Generation through Denoising Diffusion Probabilistic Models [0.0]
This study explores the generation of synthesized fingerprint images using Denoising Diffusion Probabilistic Models (DDPMs).
Our results reveal that DiffFinger not only matches the quality of authentic training data but also provides a richer set of biometric samples, reflecting true-to-life variability.
arXiv Detail & Related papers (2024-03-15T14:34:29Z) - Generative Models are Self-Watermarked: Declaring Model Authentication
through Re-Generation [17.88043926057354]
Verifying data ownership poses formidable challenges, particularly in cases of unauthorized reuse of generated data.
Our work focuses on detecting data reuse from even a single sample.
We propose an explainable verification procedure that attributes data ownership through re-generation, and further amplifies these fingerprints in the generative models through iterative data re-generation.
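The one-line summary does not describe the verification procedure in detail; as a rough sketch of the re-generation idea it alludes to, and under the assumption of a hypothetical `regenerate` callable that sends a sample on a round trip through the candidate model, ownership could be attributed when the model reconstructs a sample with unusually low error, with the round trip iterated to amplify the model's own fingerprint.

```python
import numpy as np

def regeneration_error(sample, regenerate, n_rounds: int = 1) -> float:
    """Mean squared error between a sample and its re-generation by a candidate model.

    `regenerate` is a hypothetical callable standing in for whatever round trip
    the model exposes (e.g. inversion into the latent space followed by generation).
    """
    x = np.asarray(sample, dtype=np.float64)
    y = x
    for _ in range(n_rounds):  # iterating the round trip can amplify the model's fingerprint
        y = np.asarray(regenerate(y), dtype=np.float64)
    return float(np.mean((x - y) ** 2))

def attribute_to_model(sample, regenerate, threshold: float = 1e-3) -> bool:
    # Placeholder threshold; in practice it would be calibrated on samples of
    # known provenance (generated by the model vs. produced elsewhere).
    return regeneration_error(sample, regenerate) < threshold
```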
arXiv Detail & Related papers (2024-02-23T10:48:21Z) - AttackNet: Enhancing Biometric Security via Tailored Convolutional Neural Network Architectures for Liveness Detection [20.821562115822182]
AttackNet is a bespoke Convolutional Neural Network architecture designed to combat spoofing threats in biometric systems.
It offers a layered defense mechanism, seamlessly transitioning from low-level feature extraction to high-level pattern discernment.
Benchmarking the model across diverse datasets shows superior performance compared to contemporary models.
arXiv Detail & Related papers (2024-02-06T07:22:50Z) - Multimodal Adaptive Fusion of Face and Gait Features using Keyless
attention based Deep Neural Networks for Human Identification [67.64124512185087]
Soft biometrics such as gait are widely used with face in surveillance tasks like person recognition and re-identification.
We propose a novel adaptive multi-biometric fusion strategy for the dynamic incorporation of gait and face biometric cues by leveraging keyless attention deep neural networks.
arXiv Detail & Related papers (2023-03-24T05:28:35Z) - FedForgery: Generalized Face Forgery Detection with Residual Federated
Learning [87.746829550726]
Existing face forgery detection methods train directly on publicly shared or centralized data.
The paper proposes a novel generalized residual Federated learning approach for face Forgery detection (FedForgery).
Experiments conducted on publicly available face forgery detection datasets prove the superior performance of the proposed FedForgery.
arXiv Detail & Related papers (2022-10-18T03:32:18Z) - CARLA-GeAR: a Dataset Generator for a Systematic Evaluation of
Adversarial Robustness of Vision Models [61.68061613161187]
This paper presents CARLA-GeAR, a tool for the automatic generation of synthetic datasets for evaluating the robustness of neural models against physical adversarial patches.
The tool is built on the CARLA simulator, using its Python API, and allows the generation of datasets for several vision tasks in the context of autonomous driving.
The paper presents an experimental study to evaluate the performance of some defense methods against such attacks, showing how the datasets generated with CARLA-GeAR might be used in future work as a benchmark for adversarial defense in the real world.
arXiv Detail & Related papers (2022-06-09T09:17:38Z) - Explainable Adversarial Attacks in Deep Neural Networks Using Activation
Profiles [69.9674326582747]
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing these elements can quickly pinpoint exploited areas in a model.
arXiv Detail & Related papers (2021-03-18T13:04:21Z) - Responsible Disclosure of Generative Models Using Scalable
Fingerprinting [70.81987741132451]
Deep generative models have achieved a qualitatively new level of performance.
There are concerns about how this technology can be misused to spoof sensors, generate deepfakes, and enable misinformation at scale.
Our work enables responsible disclosure of such state-of-the-art generative models, allowing researchers and companies to fingerprint their models.
arXiv Detail & Related papers (2020-12-16T03:51:54Z) - Artificial Fingerprinting for Generative Models: Rooting Deepfake
Attribution in Training Data [64.65952078807086]
Photorealistic image generation has reached a new level of quality thanks to breakthroughs in generative adversarial networks (GANs).
Yet, the dark side of such deepfakes, the malicious use of generated media, raises concerns about visual misinformation.
We seek a proactive and sustainable solution on deepfake detection by introducing artificial fingerprints into the models.
arXiv Detail & Related papers (2020-07-16T16:49:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the accuracy of the information and is not responsible for any consequences arising from its use.