Automatically Lock Your Neural Networks When You're Away
- URL: http://arxiv.org/abs/2103.08472v1
- Date: Mon, 15 Mar 2021 15:47:54 GMT
- Title: Automatically Lock Your Neural Networks When You're Away
- Authors: Ge Ren, Jun Wu, Gaolei Li, Shenghong Li
- Abstract summary: We propose Model-Lock (M-LOCK) to realize an end-to-end neural network with local dynamic access control.
Three kinds of model training strategies are essential to achieve a large performance divergence between certified and suspect inputs within a single neural network.
- Score: 5.153873824423363
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Smartphones and laptops can be unlocked by face or fingerprint
recognition, yet neural networks, which handle numerous requests every day,
have little capability to distinguish untrustworthy users from credible ones.
This makes a model risky to trade as a commodity. Existing research either
focuses on establishing intellectual property ownership of a commercialized
model or traces the source of a leak after pirated models appear. Nevertheless,
actively verifying a user's legitimacy before returning predictions has not yet
been considered. In this paper, we propose Model-Lock (M-LOCK), an end-to-end
neural network with local dynamic access control. Much like a smartphone's
automatic locking function, it prevents malicious attackers from obtaining
usable performance while you are away. Three kinds of model training strategies
are essential to achieve a large performance divergence between certified and
suspect inputs within a single neural network. Extensive experiments on the
MNIST, FashionMNIST, CIFAR10, CIFAR100, SVHN, and GTSRB datasets demonstrate
the feasibility and effectiveness of the proposed scheme.
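As a rough illustration of the access-control idea described in the abstract (not the paper's actual three training strategies), the sketch below trains a single classifier to answer correctly only when inputs carry a certification token and to collapse to a useless constant label otherwise. The corner-patch token, the lock label, and the equal loss weighting are all assumptions made for this example.

```python
# Minimal sketch (not the paper's exact M-LOCK training strategies): train one
# network to answer correctly only when a secret "certification token" is
# stamped onto the input, and to emit a useless constant label otherwise.
import torch
import torch.nn as nn
import torch.nn.functional as F

def stamp_token(x, value=1.0):
    """Hypothetical certification token: a small bright patch in the corner."""
    x = x.clone()
    x[:, :, :3, :3] = value
    return x

class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

def train_step(model, opt, x, y, lock_label=0):
    """One update: certified (token-stamped) inputs keep their true labels,
    suspect (raw) inputs are pushed toward a meaningless lock label."""
    opt.zero_grad()
    loss_cert = F.cross_entropy(model(stamp_token(x)), y)
    lock_target = torch.full_like(y, lock_label)
    loss_suspect = F.cross_entropy(model(x), lock_target)
    loss = loss_cert + loss_suspect   # equal weighting is an arbitrary choice here
    loss.backward()
    opt.step()
    return loss.item()

# Usage with MNIST-shaped dummy data:
model = SmallCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.rand(64, 1, 28, 28), torch.randint(0, 10, (64,))
print(train_step(model, opt, x, y))
```

At inference time, only clients that know how to stamp the token obtain normal accuracy, while everyone else sees a degraded, near-constant output, which is the certified/suspect divergence the abstract describes.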
Related papers
- TEN-GUARD: Tensor Decomposition for Backdoor Attack Detection in Deep Neural Networks [3.489779105594534]
We introduce a novel approach to backdoor detection using two tensor decomposition methods applied to network activations.
This has a number of advantages relative to existing detection methods, including the ability to analyze multiple models at the same time.
Results show that our method detects backdoored networks more accurately and efficiently than current state-of-the-art methods.
arXiv Detail & Related papers (2024-01-06T03:08:28Z)
- FreeEagle: Detecting Complex Neural Trojans in Data-Free Cases [50.065022493142116]
Trojan attack on deep neural networks, also known as backdoor attack, is a typical threat to artificial intelligence.
FreeEagle is the first data-free backdoor detection method that can effectively detect complex backdoor attacks.
arXiv Detail & Related papers (2023-02-28T11:31:29Z)
- Publishing Efficient On-device Models Increases Adversarial Vulnerability [58.6975494957865]
In this paper, we study the security considerations of publishing on-device variants of large-scale models.
We first show that an adversary can exploit on-device models to make attacking the large models easier.
We then show that the vulnerability increases as the similarity between a full-scale model and its efficient variant increases.
arXiv Detail & Related papers (2022-12-28T05:05:58Z)
- An anomaly detection approach for backdoored neural networks: face recognition as a case study [77.92020418343022]
We propose a novel backdoored network detection method based on the principle of anomaly detection.
We test our method on a novel dataset of backdoored networks and report detectability results with perfect scores.
arXiv Detail & Related papers (2022-08-22T12:14:13Z)
- Neural network fragile watermarking with no model performance degradation [28.68910526223425]
We propose a novel neural network fragile watermarking scheme with no model performance degradation.
Experiments show that the proposed method can effectively detect model malicious fine-tuning with no model performance degradation.
arXiv Detail & Related papers (2022-08-16T07:55:20Z)
- MOVE: Effective and Harmless Ownership Verification via Embedded External Features [109.19238806106426]
We propose an effective and harmless model ownership verification (MOVE) to defend against different types of model stealing simultaneously.
We verify ownership by checking whether a suspicious model contains the knowledge of defender-specified external features.
In particular, we develop our MOVE method under both white-box and black-box settings to provide comprehensive model protection.
arXiv Detail & Related papers (2022-08-04T02:22:29Z)
- PCPT and ACPT: Copyright Protection and Traceability Scheme for DNN Models [13.043683635373213]
Deep neural networks (DNNs) have achieved tremendous success in artificial intelligence (AI) fields.
However, DNN models can easily be copied illegally, redistributed, or abused by criminals.
arXiv Detail & Related papers (2022-06-06T12:12:47Z)
- Fingerprinting Multi-exit Deep Neural Network Models via Inference Time [18.12409619358209]
We propose a novel approach to fingerprint multi-exit models via inference time rather than inference predictions.
Specifically, we design an effective method to generate a set of fingerprint samples that give the inference process a unique and robust inference time cost (a minimal timing sketch appears after this list).
arXiv Detail & Related papers (2021-10-07T04:04:01Z)
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
- Generating Probabilistic Safety Guarantees for Neural Network Controllers [30.34898838361206]
We use a dynamics model to determine the output properties that must hold for a neural network controller to operate safely.
We develop an adaptive verification approach to efficiently generate an overapproximation of the neural network policy.
We show that our method is able to generate meaningful probabilistic safety guarantees for aircraft collision avoidance neural networks.
arXiv Detail & Related papers (2021-03-01T18:48:21Z)
- Cooling-Shrinking Attack: Blinding the Tracker with Imperceptible Noises [87.53808756910452]
A cooling-shrinking attack method is proposed to deceive state-of-the-art SiameseRPN-based trackers.
Our method has good transferability and is able to deceive other top-performance trackers such as DaSiamRPN, DaSiamRPN-UpdateNet, and DiMP.
arXiv Detail & Related papers (2020-03-21T07:13:40Z)
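For the inference-time fingerprinting entry above, the summary only states that specially generated samples give a model a unique and robust inference-time cost. The sketch below covers the measurement side only; the paper's sample-generation method is not reproduced, and the model and samples are placeholders.

```python
# Minimal sketch of recording a per-sample inference-time profile, the signal
# the inference-time fingerprinting paper above builds on. The model and the
# candidate fingerprint samples here are stand-ins, not the paper's method.
import time
import torch

@torch.no_grad()
def timing_profile(model, samples, repeats=20):
    """Return the median wall-clock inference time (seconds) for each sample."""
    model.eval()
    profile = []
    for x in samples:
        times = []
        for _ in range(repeats):
            start = time.perf_counter()
            model(x.unsqueeze(0))          # single-sample forward pass
            times.append(time.perf_counter() - start)
        times.sort()
        profile.append(times[len(times) // 2])   # median is robust to timing jitter
    return profile

# Usage with a stand-in model; a real multi-exit network would exit early on
# "easy" fingerprint samples, making the timing profile model-specific.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
samples = [torch.rand(1, 28, 28) for _ in range(5)]
print(timing_profile(model, samples))
```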
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.