Against Membership Inference Attack: Pruning is All You Need
- URL: http://arxiv.org/abs/2008.13578v4
- Date: Sun, 4 Jul 2021 13:49:31 GMT
- Title: Against Membership Inference Attack: Pruning is All You Need
- Authors: Yijue Wang, Chenghong Wang, Zigeng Wang, Shanglin Zhou, Hang Liu,
Jinbo Bi, Caiwen Ding, Sanguthevar Rajasekaran
- Abstract summary: We envision that the weight pruning technique will help deep neural networks (DNNs) defend against membership inference attack (MIA).
In this work, we propose a pruning algorithm and show that it can find a subnetwork that prevents privacy leakage from MIA.
Our experimental results illustrate that the attack accuracy against the compressed model is up to 13.6% and 10% lower than that of the baseline and the Min-Max game, respectively.
- Score: 22.13027338667513
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The large model size, heavy computational cost, and vulnerability to membership inference attack (MIA) have impeded the popularity of deep neural networks (DNNs), especially on mobile devices. To address these challenges, we envision that the weight pruning technique can help DNNs defend against MIA while reducing model storage and computation. In this work, we propose a pruning algorithm and show that it can find a subnetwork that prevents privacy leakage from MIA while achieving accuracy competitive with the original DNN. We also verify our theoretical insights with experiments. Our experimental results illustrate that the attack accuracy against the compressed model is up to 13.6% and 10% lower than that of the baseline and the Min-Max game, respectively.
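A minimal sketch of the generic operation the abstract builds on, magnitude-based weight pruning, using PyTorch's torch.nn.utils.prune. The paper's own pruning algorithm is not reproduced here, and the model and the 50% sparsity level are arbitrary illustrative choices.

```python
# Minimal sketch: magnitude-based weight pruning of a small classifier.
# This is NOT the paper's algorithm; it only illustrates the generic
# weight-pruning operation that MIA-defense-by-compression builds on.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

# Zero out the 50% smallest-magnitude weights in each Linear layer
# (the 50% sparsity level is an arbitrary illustrative choice).
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # bake the mask into the weights

# Report overall sparsity (biases are not pruned, so this is < 50%).
total = sum(p.numel() for p in model.parameters())
zeros = sum((p == 0).sum().item() for p in model.parameters())
print(f"overall sparsity: {zeros / total:.2%}")
```

The intuition is that a sparser subnetwork has less spare capacity to memorize individual training records, and that memorization is precisely the signal an MIA adversary exploits, for instance by thresholding the model's confidence on a candidate record.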
Related papers
- Augmented Neural Fine-Tuning for Efficient Backdoor Purification [16.74156528484354]
Recent studies have revealed the vulnerability of deep neural networks (DNNs) to various backdoor attacks.
We propose Neural mask Fine-Tuning (NFT), which aims to optimally re-organize neuron activity.
NFT relaxes the trigger synthesis process and eliminates the requirement of the adversarial search module.
arXiv Detail & Related papers (2024-07-14T02:36:54Z)
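A hedged sketch of the general mask-fine-tuning idea behind approaches like NFT: learn a soft mask over a frozen layer's weights on clean data so that neurons serving only the backdoor can be suppressed. The mask granularity, initialization, loss, and data below are illustrative assumptions, not details from the paper.

```python
# Hedged sketch of mask-based fine-tuning for backdoor purification:
# learn a soft mask over a frozen layer's weights on clean data.
# Granularity, init, loss, and data are illustrative assumptions.
import torch
import torch.nn as nn

class MaskedLinear(nn.Module):
    def __init__(self, linear: nn.Linear):
        super().__init__()
        self.linear = linear
        for p in self.linear.parameters():
            p.requires_grad_(False)  # freeze the original weights
        # Init logits so sigmoid(mask) starts near 1 (mask is transparent).
        self.mask_logits = nn.Parameter(torch.full_like(linear.weight, 3.0))

    def forward(self, x):
        mask = torch.sigmoid(self.mask_logits)  # soft mask in (0, 1)
        return nn.functional.linear(x, self.linear.weight * mask,
                                    self.linear.bias)

layer = MaskedLinear(nn.Linear(32, 10))
opt = torch.optim.Adam([layer.mask_logits], lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# One illustrative step on random stand-in "clean" data.
x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))
opt.zero_grad()
loss = loss_fn(layer(x), y)
loss.backward()
opt.step()
```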
- A Geometrical Approach to Evaluate the Adversarial Robustness of Deep Neural Networks [52.09243852066406]
Adversarial Converging Time Score (ACTS) measures the converging time as an adversarial robustness metric.
We validate the effectiveness and generalization of the proposed ACTS metric against different adversarial attacks on the large-scale ImageNet dataset.
arXiv Detail & Related papers (2023-10-10T09:39:38Z)
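ACTS itself is defined geometrically in that paper; the snippet below only illustrates the coarser intuition of "converging time" by counting how many PGD steps an attack needs before the prediction flips. The attack budget, step size, and model are illustrative assumptions, not the paper's metric.

```python
# Illustrative proxy (NOT the paper's ACTS definition): count PGD steps
# until an adversarial perturbation flips the model's prediction.
import torch
import torch.nn as nn

def steps_to_flip(model, x, y, eps=0.03, alpha=0.005, max_steps=100):
    delta = torch.zeros_like(x, requires_grad=True)
    loss_fn = nn.CrossEntropyLoss()
    for step in range(1, max_steps + 1):
        loss = loss_fn(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # gradient-sign ascent step
            delta.clamp_(-eps, eps)             # stay inside the L-inf ball
        delta.grad.zero_()
        if model(x + delta).argmax(1).item() != y.item():
            return step                         # attack has "converged"
    return max_steps

model = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))
x, y = torch.randn(1, 1, 28, 28), torch.tensor([3])
print("steps to flip:", steps_to_flip(model, x, y))
```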
- Secure Deep Learning-based Distributed Intelligence on Pocket-sized Drones [75.80952211739185]
Palm-sized nano-drones are an appealing class of edge nodes, but their limited computational resources prevent running large deep-learning models onboard.
Adopting an edge-fog computational paradigm, we can offload part of the computation to the fog; however, this poses security concerns if the fog node, or the communication link, cannot be trusted.
We propose a novel distributed edge-fog execution scheme that validates fog computation by redundantly executing a random subnetwork aboard our nano-drone.
arXiv Detail & Related papers (2023-07-04T08:29:41Z)
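A hedged sketch of the redundancy idea in the drone paper above, assuming the fog reports per-layer activations and the edge device re-executes one randomly chosen layer to spot-check them; the network, tolerance, and tampering model are illustrative assumptions.

```python
# Hedged sketch of redundancy-based verification of offloaded inference:
# the edge device re-executes one randomly chosen layer and checks the
# fog's reported activation for that layer. Tolerance is arbitrary.
import random
import torch
import torch.nn as nn

layers = nn.ModuleList([nn.Linear(16, 16) for _ in range(4)])

def fog_execute(x, tamper=False):
    """Stand-in for the untrusted fog: returns every layer's activation."""
    acts = []
    for layer in layers:
        x = torch.relu(layer(x))
        acts.append(x)
    if tamper:
        acts[-1] = acts[-1] + 1.0  # simulate a manipulated final result
    return acts

def verify(x, acts, atol=1e-5):
    """Edge device: re-run one random layer on the fog's reported input."""
    i = random.randrange(len(layers))
    inp = x if i == 0 else acts[i - 1]
    local = torch.relu(layers[i](inp))
    return torch.allclose(local, acts[i], atol=atol)

# Detection is probabilistic: tampering is caught only when the randomly
# checked layer is one of the affected ones.
x = torch.randn(1, 16)
print("honest fog passes:", verify(x, fog_execute(x)))
print("tampered fog passes:", verify(x, fog_execute(x, tamper=True)))
```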
- Bridging the Gap between ANNs and SNNs by Calibrating Offset Spikes [19.85338979292052]
Spiking Neural Networks (SNNs) have attracted great attention due to their distinctive characteristics of low power consumption and temporal information processing.
ANN-SNN conversion, as the most commonly used training method for applying SNNs, can ensure that converted SNNs achieve comparable performance to ANNs on large-scale datasets.
In this paper, instead of evaluating different conversion errors and then eliminating these errors, we define an offset spike to measure the degree of deviation between actual and desired SNN firing rates.
arXiv Detail & Related papers (2023-02-21T14:10:56Z)
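A small worked example of the offset-spike notion described above, under the simplifying assumption that a rate-coded SNN neuron should emit round(a * T) spikes over T timesteps for an ANN activation a; the encoding details are assumptions, not the paper's definition.

```python
# Hedged sketch of the "offset spike" idea: compare the spike count an
# SNN neuron actually emits over T timesteps with the count the source
# ANN activation would imply. Rate-coding details are assumptions.
import torch

T = 8                                          # simulation timesteps (arbitrary)
ann_act = torch.tensor([0.71, 0.10, 0.55])     # ANN activations in [0, 1]
desired = torch.round(ann_act * T)             # desired spike counts over T steps
actual = torch.tensor([6.0, 1.0, 3.0])         # spike counts the SNN produced

offset = actual - desired                      # signed deviation in spikes
print(offset)                                  # tensor([ 0.,  0., -1.])
```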
- Publishing Efficient On-device Models Increases Adversarial Vulnerability [58.6975494957865]
In this paper, we study the security considerations of publishing on-device variants of large-scale models.
We first show that an adversary can exploit on-device models to make attacking the large models easier.
We then show that the vulnerability increases as the similarity between a full-scale model and its efficient variant increases.
arXiv Detail & Related papers (2022-12-28T05:05:58Z)
- Partially Oblivious Neural Network Inference [4.843820624525483]
We show that for neural network models, like CNNs, some information leakage can be acceptable.
We experimentally demonstrate that in a CIFAR-10 network we can leak up to 80% of the model's weights with practically no security impact.
arXiv Detail & Related papers (2022-10-27T05:39:36Z)
- Versatile Weight Attack via Flipping Limited Bits [68.45224286690932]
We study a novel attack paradigm, which modifies model parameters in the deployment stage.
Considering the effectiveness and stealthiness goals, we provide a general formulation to perform the bit-flip based weight attack.
We present two cases of the general formulation with different malicious purposes, i.e., single sample attack (SSA) and triggered samples attack (TSA).
arXiv Detail & Related papers (2022-07-25T03:24:58Z)
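The bit-flip primitive underlying the weight attack above can be made concrete; the snippet below shows the raw mechanism only, not the paper's SSA/TSA optimization. Flipping a single exponent bit of an IEEE-754 float32 weight can change its value by orders of magnitude.

```python
# Illustration of the bit-flip primitive behind weight attacks (not the
# paper's search procedure): flip one bit in a float32 weight's IEEE-754
# representation and observe the change in value.
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip bit `bit` (0 = LSB) of a float32 and return the new value."""
    (as_int,) = struct.unpack("<I", struct.pack("<f", value))
    (flipped,) = struct.unpack("<f", struct.pack("<I", as_int ^ (1 << bit)))
    return flipped

w = 0.5
for bit in (0, 23, 30):  # mantissa LSB, exponent LSB, exponent MSB
    print(f"flip bit {bit:2d}: {w} -> {flip_bit(w, bit)}")
```

Flipping the exponent's most significant bit turns 0.5 into roughly 1.7e38, which is why a handful of well-chosen bit flips can suffice to redirect a model's behavior.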
- Efficient and Accurate Conversion of Spiking Neural Network with Burst Spikes [9.210531698373256]
Spiking neural networks (SNNs), as brain-inspired energy-efficient networks, have attracted the interest of researchers.
One effective way is to map the weights of a trained ANN to an SNN to achieve high reasoning ability.
The converted spiking neural network often suffers from performance degradation and a considerable time delay.
We propose a neuron model that releases burst spikes, a cheap but highly efficient method to handle residual information.
arXiv Detail & Related papers (2022-04-28T03:48:17Z)
- BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [65.2021953284622]
We study the robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z)
- Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits [55.740716446995805]
We study a novel attack paradigm, which modifies model parameters in the deployment stage for malicious purposes.
Our goal is to misclassify a specific sample into a target class without any sample modification.
We formulate the attack as a binary integer programming (BIP) problem and, by utilizing the latest techniques in integer programming, equivalently reformulate it as a continuous optimization problem.
arXiv Detail & Related papers (2021-02-21T03:13:27Z)
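The "equivalent reformulation" mentioned above usually refers to relaxing the binary constraint on the bit vector; one standard way to write this, sketched here as an assumption about the general l_p-box technique rather than a transcription of the paper, is:

```latex
% A binary integer program over the bit vector b, relaxed into a
% continuous problem via the standard l_p-box trick (p = 2): the set
% {0,1}^n equals the intersection of the box [0,1]^n with a sphere.
% General technique, hedged; not copied from the paper.
\begin{aligned}
\min_{\mathbf{b}} \;
  & \mathcal{L}\!\left(f_{\mathbf{b}}(x),\, t\right)
    + \lambda \, d_H\!\left(\mathbf{b}, \mathbf{b}^{0}\right)
    && \text{(hit target $t$ while flipping few bits)} \\
\text{s.t. } \;
  & \mathbf{b} \in \{0,1\}^{n}
    \;\Longleftrightarrow\;
    \mathbf{b} \in [0,1]^{n} \cap
    \Big\{ \mathbf{b} :
      \big\lVert \mathbf{b} - \tfrac{1}{2}\mathbf{1} \big\rVert_2^2
      = \tfrac{n}{4} \Big\}
\end{aligned}
```

Here $\mathbf{b}^{0}$ denotes the original bits and $d_H$ the Hamming distance; the right-hand constraint set is continuous, so standard solvers such as ADMM can be applied.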