High-Robustness, Low-Transferability Fingerprinting of Neural Networks
- URL: http://arxiv.org/abs/2105.07078v1
- Date: Fri, 14 May 2021 21:48:23 GMT
- Title: High-Robustness, Low-Transferability Fingerprinting of Neural Networks
- Authors: Siyue Wang, Xiao Wang, Pin-Yu Chen, Pu Zhao and Xue Lin
- Abstract summary: This paper proposes Characteristic Examples for effectively fingerprinting deep neural networks.
They feature high robustness (the fingerprint remains valid for the base model even under model pruning) and low transferability to unassociated models.
- Score: 78.2527498858308
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper proposes Characteristic Examples for effectively
fingerprinting deep neural networks, featuring high robustness to the base
model under model pruning as well as low transferability to unassociated
models. This is the first work to take both robustness and transferability
into consideration when generating realistic fingerprints, whereas current
methods rely on impractical assumptions and may incur large false-positive
rates. To achieve a better trade-off between robustness and transferability,
we propose three kinds of characteristic examples: vanilla C-examples,
RC-examples, and LTRC-examples, to derive fingerprints from the original base
model. To fairly characterize this trade-off, we propose the Uniqueness Score,
a comprehensive metric that measures the difference between robustness and
transferability and also serves as an indicator of the false-alarm problem.
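As a rough illustration of the pipeline above (not the authors' released
code), the sketch below generates a vanilla C-example by gradient ascent on a
random input until the base model emits a chosen label with high confidence,
then scores a fingerprint set with an assumed form of the Uniqueness Score:
matching rate on pruned copies of the base model (robustness) minus matching
rate on unassociated models (transferability). All function and variable
names are illustrative, not the paper's API.

```python
# Hypothetical sketch of characteristic-example fingerprinting (PyTorch).
import torch
import torch.nn.functional as F

def generate_c_example(base_model, target_label, shape=(1, 3, 32, 32),
                       steps=200, lr=0.1):
    """Gradient-ascend a random input until the base model predicts
    `target_label` with high confidence (a vanilla C-example)."""
    base_model.eval()
    x = torch.rand(shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(base_model(x), torch.tensor([target_label]))
        loss.backward()
        opt.step()
        x.data.clamp_(0.0, 1.0)  # keep the input in a valid image range
    return x.detach()

def matching_rate(models, fingerprints, labels):
    """Fraction of (model, fingerprint) pairs whose prediction matches the
    label recorded on the base model."""
    hits, total = 0, 0
    with torch.no_grad():
        for m in models:
            for x, y in zip(fingerprints, labels):
                hits += int(m(x).argmax(dim=1).item() == y)
                total += 1
    return hits / total

def uniqueness_score(pruned_models, unrelated_models, fingerprints, labels):
    """Assumed form: robustness (match rate on pruned base models) minus
    transferability (match rate on unassociated models). Higher is better;
    a low score signals potential false alarms."""
    robustness = matching_rate(pruned_models, fingerprints, labels)
    transferability = matching_rate(unrelated_models, fingerprints, labels)
    return robustness - transferability
```

Under this reading, RC- and LTRC-examples would modify only the generation
loop (e.g., injecting perturbations during optimization) to trade robustness
against transferability; the score evaluates whichever fingerprints result.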
Related papers
- The Risk of Federated Learning to Skew Fine-Tuning Features and Underperform Out-of-Distribution Robustness [50.52507648690234]
Federated learning risks skewing fine-tuning features and compromising the model's out-of-distribution robustness.
We introduce three robustness indicators and conduct experiments across diverse robust datasets.
Our approach markedly enhances the robustness across diverse scenarios, encompassing various parameter-efficient fine-tuning methods.
arXiv Detail & Related papers (2024-01-25T09:18:51Z)
- Improving Adversarial Robustness to Sensitivity and Invariance Attacks with Deep Metric Learning [80.21709045433096]
A standard approach to adversarial robustness assumes a framework that defends against samples crafted by minimally perturbing a clean input.
We use metric learning to frame adversarial regularization as an optimal transport problem.
Our preliminary results indicate that regularizing over invariant perturbations in our framework improves both invariant and sensitivity defense.
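The summary leaves the mechanism abstract; below is a minimal sketch of one
common way to cast adversarial regularization as metric learning: a triplet
loss that pulls an adversarially perturbed sample toward its clean anchor and
pushes it away from another class. This illustrates the general idea only,
not the paper's optimal-transport formulation.

```python
# Illustrative triplet-style adversarial regularizer (PyTorch), not the
# paper's exact objective: embed clean anchor, its adversarial version
# (positive), and a sample from another class (negative).
import torch
import torch.nn.functional as F

def adversarial_triplet_loss(embed, x_clean, x_adv, x_other, margin=1.0):
    """embed: network mapping inputs to feature vectors."""
    a = embed(x_clean)   # anchor: clean sample
    p = embed(x_adv)     # positive: adversarial copy of the anchor
    n = embed(x_other)   # negative: sample from a different class
    d_ap = F.pairwise_distance(a, p)
    d_an = F.pairwise_distance(a, n)
    # Pull adversarial features onto the clean manifold; push other classes away.
    return F.relu(d_ap - d_an + margin).mean()
```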
arXiv Detail & Related papers (2022-11-04T13:54:02Z)
- Toward Certified Robustness Against Real-World Distribution Shifts [65.66374339500025]
We train a generative model to learn perturbations from data and define specifications with respect to the output of the learned model.
A unique challenge arising from this setting is that existing verifiers cannot tightly approximate sigmoid activations.
We propose a general meta-algorithm for handling sigmoid activations which leverages classical notions of counter-example-guided abstraction refinement.
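For context on why sigmoid activations are hard here: sound verification
needs linear upper and lower bounds on sigmoid(x) over an input interval
[l, u]. The numpy sketch below shows a standard such relaxation for the easy
cases (interval entirely in the convex region x <= 0 or the concave region
x >= 0); a CEGAR-style meta-algorithm would refine bounds like these whenever
a reported counter-example turns out to be spurious. The bounding strategy
shown is a common one, not necessarily the paper's.

```python
# Sound linear bounds for sigmoid on [l, u], assuming the interval lies
# entirely in the convex (u <= 0) or concave (l >= 0) region.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_linear_bounds(l, u):
    """Return (a_lo, b_lo, a_hi, b_hi) such that
    a_lo*x + b_lo <= sigmoid(x) <= a_hi*x + b_hi for all x in [l, u]."""
    assert l <= u and (u <= 0 or l >= 0), "mixed-curvature case needs splitting"
    # Chord through the endpoints.
    a_chord = (sigmoid(u) - sigmoid(l)) / (u - l) if u > l else 0.0
    b_chord = sigmoid(l) - a_chord * l
    # Tangent at the midpoint: slope is sigmoid'(m) = s * (1 - s).
    m = 0.5 * (l + u)
    s = sigmoid(m)
    a_tan, b_tan = s * (1 - s), s - s * (1 - s) * m
    if l >= 0:   # concave region: chord below, tangent above
        return a_chord, b_chord, a_tan, b_tan
    else:        # convex region: tangent below, chord above
        return a_tan, b_tan, a_chord, b_chord
```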
arXiv Detail & Related papers (2022-06-08T04:09:13Z)
- Robustness and Accuracy Could Be Reconcilable by (Proper) Definition [109.62614226793833]
The trade-off between robustness and accuracy has been widely studied in the adversarial literature.
We find that it may stem from the improperly defined robust error, which imposes an inductive bias of local invariance.
Our proposed SCORE (SelfCOnsistent Robust Error) by definition facilitates the reconciliation between robustness and accuracy, while still handling the worst-case uncertainty.
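For reference, the conventionally defined robust error that the summary calls
improper is the worst-case classification error over a perturbation ball,
which forces the classifier to be locally invariant even where the true
labels are not:

```latex
% Standard worst-case robust error over an \epsilon-ball; the paper argues
% this definition imposes an inductive bias of local invariance.
\mathcal{R}_{\mathrm{rob}}(f) =
  \mathbb{E}_{(x,y)\sim\mathcal{D}}
  \Big[ \max_{\|\delta\|\le\epsilon} \mathbf{1}\{ f(x+\delta) \neq y \} \Big]
```

Roughly, SCORE revises the inner objective to measure agreement with the data
distribution at the perturbed points themselves rather than demanding
invariance to the clean label, which is what makes robustness and accuracy
reconcilable.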
arXiv Detail & Related papers (2022-02-21T10:36:09Z)
- TRS: Transferability Reduced Ensemble via Encouraging Gradient Diversity and Model Smoothness [14.342349428248887]
Adversarial Transferability is an intriguing property of adversarial examples.
This paper theoretically analyzes sufficient conditions for transferability between models.
We propose a practical algorithm to reduce transferability within an ensemble to improve its robustness.
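As a rough sketch of the ensemble-level idea, the snippet below penalizes the
pairwise cosine similarity of input gradients across ensemble members, so
that an adversarial direction found on one model is less likely to transfer
to another. The actual TRS objective also includes a model-smoothness term,
which is omitted here.

```python
# Illustrative gradient-diversity regularizer for an ensemble (PyTorch);
# shows only the cosine-similarity part of a TRS-style objective.
import torch
import torch.nn.functional as F

def grad_diversity_penalty(models, x, y):
    """Penalize alignment of input gradients across ensemble members."""
    grads = []
    for m in models:
        x_i = x.clone().requires_grad_(True)
        loss = F.cross_entropy(m(x_i), y)
        g, = torch.autograd.grad(loss, x_i, create_graph=True)
        grads.append(g.flatten(start_dim=1))
    penalty = 0.0
    for i in range(len(grads)):
        for j in range(i + 1, len(grads)):
            penalty = penalty + F.cosine_similarity(grads[i], grads[j], dim=1).mean()
    return penalty  # add to the ensemble's classification loss with a weight
```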
arXiv Detail & Related papers (2021-04-01T17:58:35Z)
- Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
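To make the joint perturbations concrete, the minimal sketch below evaluates
a model's loss under simultaneous FGSM-style ascent steps on both the input
and the weights, the kind of joint adversary a non-singular formulation
considers. The paper's exact formulation and solver are not reproduced here.

```python
# Illustrative joint input/weight perturbation (PyTorch): one ascent step on
# data and parameters, approximating the inner maximization of a joint
# (non-singular) robustness objective.
import torch
import torch.nn.functional as F

def joint_perturbation_loss(model, x, y, eps_x=8/255, eps_w=1e-3):
    x_adv = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    g_x = torch.autograd.grad(loss, x_adv, retain_graph=True)[0]
    g_w = torch.autograd.grad(loss, list(model.parameters()))
    x_adv = (x + eps_x * g_x.sign()).clamp(0, 1)  # perturbed input
    with torch.no_grad():
        for p, g in zip(model.parameters(), g_w):  # perturbed weights
            p.add_(eps_w * g.sign())
        joint = F.cross_entropy(model(x_adv), y).item()
        for p, g in zip(model.parameters(), g_w):  # restore original weights
            p.sub_(eps_w * g.sign())
    return joint
```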
arXiv Detail & Related papers (2021-02-23T20:59:30Z)
- Exploring the Vulnerability of Deep Neural Networks: A Study of Parameter Corruption [40.76024057426747]
We propose an indicator to measure the robustness of neural network parameters by exploiting their vulnerability via parameter corruption.
For practical purposes, we give a gradient-based estimation, which is far more effective than random corruption trials.
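A minimal sketch of the contrast the summary draws: estimate the worst-case
loss increase from corrupting parameters either by random trials or by a
single gradient-directed step, with the gradient-based estimate serving as
the cheaper and tighter robustness indicator. Function names are illustrative.

```python
# Illustrative parameter-corruption indicators (PyTorch): loss increase under
# a small parameter perturbation, via random trials vs. one gradient step.
import torch
import torch.nn.functional as F

def loss_with_offset(model, x, y, deltas):
    """Loss after temporarily adding `deltas` to the parameters."""
    with torch.no_grad():
        for p, d in zip(model.parameters(), deltas):
            p.add_(d)
        loss = F.cross_entropy(model(x), y).item()
        for p, d in zip(model.parameters(), deltas):
            p.sub_(d)  # restore original weights
    return loss

def random_corruption_indicator(model, x, y, eps=1e-3, trials=10):
    base = F.cross_entropy(model(x), y).item()
    worst = base
    for _ in range(trials):
        deltas = [eps * torch.randn_like(p) for p in model.parameters()]
        worst = max(worst, loss_with_offset(model, x, y, deltas))
    return worst - base

def gradient_corruption_indicator(model, x, y, eps=1e-3):
    loss = F.cross_entropy(model(x), y)
    grads = torch.autograd.grad(loss, list(model.parameters()))
    deltas = [eps * g.sign() for g in grads]  # first-order worst-case direction
    return loss_with_offset(model, x, y, deltas) - loss.item()
```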
arXiv Detail & Related papers (2020-06-10T02:29:28Z)
- Defense Through Diverse Directions [24.129270094757587]
We develop a novel Bayesian neural network methodology to achieve strong adversarial robustness.
We demonstrate that encouraging the network to distribute evenly across inputs makes it less susceptible to localized, brittle features.
We show empirical robustness on several benchmark datasets.
arXiv Detail & Related papers (2020-03-24T01:22:03Z)