FBI: Fingerprinting models with Benign Inputs
- URL: http://arxiv.org/abs/2208.03169v1
- Date: Fri, 5 Aug 2022 13:55:36 GMT
- Title: FBI: Fingerprinting models with Benign Inputs
- Authors: Thibault Maho, Teddy Furon, Erwan Le Merrer
- Abstract summary: This paper proposes i) fingerprinting schemes that are resilient to significant modifications of the models, generalizing to the notion of model families and their variants, and ii) an extension of the fingerprinting task to identifying which model family is in the black box.
We achieve both goals by demonstrating that benign inputs, that is, unmodified images, are sufficient material for both tasks.
Both approaches are experimentally validated over an unprecedented set of more than 1,000 networks.
- Score: 17.323638042215013
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in the fingerprinting of deep neural networks detect
instances of models, placed in a black-box interaction scheme. Inputs used by
the fingerprinting protocols are specifically crafted for each precise model to
be checked for. While efficient in such a scenario, this nevertheless offers
no guarantee after even a mere modification of a model (such as retraining or
quantization). This paper tackles these challenges by proposing i)
fingerprinting schemes that are resilient to significant modifications of the
models, generalizing to the notion of model families and their variants, and ii)
an extension of the fingerprinting task encompassing scenarios where one wants
to fingerprint not only a precise model (previously referred to as a detection
task) but also to identify which model family is in the black-box
(identification task). We achieve both goals by demonstrating that benign
inputs, that is, unmodified images for instance, are sufficient material for
both tasks. We leverage an information-theoretic scheme for the identification
task. We devise a greedy discrimination algorithm for the detection task. Both
approaches are experimentally validated over an unprecedented set of more than
1,000 networks.
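The identification task described above can be illustrated with a minimal sketch (not the authors' code; function names, the agreement-count score, and the toy fingerprints are all hypothetical): each candidate model family is fingerprinted by the top-1 labels it assigns to a fixed set of benign images, and the black box is attributed to the family whose stored label vector agrees most with the observed outputs.

```python
# Hypothetical sketch of identification with benign inputs.
# A family fingerprint is the vector of top-1 labels its models assign
# to N fixed, unmodified images; the black box is attributed to the
# family with the highest agreement count.

def identify(black_box_labels, family_fingerprints):
    """Return the family whose fingerprint agrees most with the observed labels.

    black_box_labels: list of top-1 labels from the black box on N benign images.
    family_fingerprints: dict mapping family name -> list of N reference labels.
    """
    scores = {
        family: sum(a == b for a, b in zip(black_box_labels, ref))
        for family, ref in family_fingerprints.items()
    }
    return max(scores, key=scores.get)

# Toy usage: three families fingerprinted on 5 benign images.
fingerprints = {
    "resnet":    [3, 1, 4, 1, 5],
    "vgg":       [3, 1, 4, 2, 6],
    "mobilenet": [2, 7, 1, 8, 2],
}
observed = [3, 1, 4, 1, 5]  # outputs of the unknown black box
print(identify(observed, fingerprints))  # -> resnet
```

This is only the intuition; the paper's actual scheme is information-theoretic, and its detection counterpart uses a greedy discrimination algorithm to pick the benign inputs that best separate a precise model from its variants.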
Related papers
- Neural Fingerprints for Adversarial Attack Detection [2.7309692684728613]
A well-known vulnerability of deep learning models is their susceptibility to adversarial examples.
Many algorithms have been proposed to address this problem, falling generally into one of two categories.
We argue that in a white-box setting, where the attacker knows the configuration and weights of the network and the detector, they can overcome the detector.
This problem is common in security applications where even a very good model is not sufficient to ensure safety.
arXiv Detail & Related papers (2024-11-07T08:43:42Z) - Model Pairing Using Embedding Translation for Backdoor Attack Detection on Open-Set Classification Tasks [63.269788236474234]
We propose to use model pairs on open-set classification tasks for detecting backdoors.
We show that this score can be an indicator of the presence of a backdoor even when the two models have different architectures.
This technique allows for the detection of backdoors on models designed for open-set classification tasks, which is little studied in the literature.
arXiv Detail & Related papers (2024-02-28T21:29:16Z) - Efficient Verification-Based Face Identification [50.616875565173274]
We study the problem of performing face verification with an efficient neural model $f$.
Our model leads to a substantially smaller $f$, requiring only 23k parameters and 5M floating point operations (FLOPs).
We use six face verification datasets to demonstrate that our method is on par or better than state-of-the-art models.
arXiv Detail & Related papers (2023-12-20T18:08:02Z) - Robust Retraining-free GAN Fingerprinting via Personalized Normalization [21.63902009635896]
The proposed method can embed different fingerprints inside the GAN by just changing the input of the ParamGen Nets.
The performance of the proposed method in terms of robustness against both model-level and image-level attacks is superior to the state-of-the-art.
arXiv Detail & Related papers (2023-11-09T16:09:12Z) - Are You Stealing My Model? Sample Correlation for Fingerprinting Deep
Neural Networks [86.55317144826179]
Previous methods typically leverage transferable adversarial examples as the model fingerprint.
We propose a novel yet simple model stealing detection method based on SAmple Correlation (SAC).
SAC successfully defends against various model stealing attacks, even including adversarial training or transfer learning.
arXiv Detail & Related papers (2022-10-21T02:07:50Z) - FIGO: Enhanced Fingerprint Identification Approach Using GAN and One
Shot Learning Techniques [0.0]
We propose a Fingerprint Identification approach based on Generative adversarial network and One-shot learning techniques.
First, we propose a Pix2Pix model to transform low-quality fingerprint images into higher-quality ones, pixel by pixel, directly in the fingerprint enhancement tier.
Second, we construct a fully automated fingerprint feature extraction model using a one-shot learning approach to differentiate each fingerprint from the others in the fingerprint identification process.
arXiv Detail & Related papers (2022-08-11T02:45:42Z) - Pair-Relationship Modeling for Latent Fingerprint Recognition [25.435974669629374]
We propose a new scheme that can model the pair-relationship of two fingerprints directly as the similarity feature for recognition.
Experimental results on two databases show that the proposed method outperforms the state of the art.
arXiv Detail & Related papers (2022-07-02T11:31:31Z) - FDeblur-GAN: Fingerprint Deblurring using Generative Adversarial Network [22.146795282680667]
We propose a fingerprint deblurring model, FDeblur-GAN, based on conditional Generative Adversarial Networks (cGANs) and the multi-stage framework of the stacked GAN.
We integrate two auxiliary sub-networks into the model for the deblurring task.
We achieve an accuracy of 95.18% on our fingerprint database for the task of matching deblurred and ground truth fingerprints.
arXiv Detail & Related papers (2021-06-21T18:37:20Z) - Are Pretrained Transformers Robust in Intent Classification? A Missing
Ingredient in Evaluation of Out-of-Scope Intent Detection [93.40525251094071]
We first point out the importance of in-domain out-of-scope detection in few-shot intent recognition tasks.
We then illustrate the vulnerability of pretrained Transformer-based models against samples that are in-domain but out-of-scope (ID-OOS).
arXiv Detail & Related papers (2021-06-08T17:51:12Z) - Responsible Disclosure of Generative Models Using Scalable
Fingerprinting [70.81987741132451]
Deep generative models have achieved a qualitatively new level of performance.
There are concerns on how this technology can be misused to spoof sensors, generate deep fakes, and enable misinformation at scale.
Our work enables a responsible disclosure of such state-of-the-art generative models, that allows researchers and companies to fingerprint their models.
arXiv Detail & Related papers (2020-12-16T03:51:54Z) - Artificial Fingerprinting for Generative Models: Rooting Deepfake
Attribution in Training Data [64.65952078807086]
Photorealistic image generation has reached a new level of quality due to the breakthroughs of generative adversarial networks (GANs).
Yet, the dark side of such deepfakes, the malicious use of generated media, raises concerns about visual misinformation.
We seek a proactive and sustainable solution on deepfake detection by introducing artificial fingerprints into the models.
arXiv Detail & Related papers (2020-07-16T16:49:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.