Auditing Membership Leakages of Multi-Exit Networks
- URL: http://arxiv.org/abs/2208.11180v1
- Date: Tue, 23 Aug 2022 20:16:19 GMT
- Title: Auditing Membership Leakages of Multi-Exit Networks
- Authors: Zheng Li, Yiyong Liu, Xinlei He, Ning Yu, Michael Backes and Yang
Zhang
- Abstract summary: We perform the first privacy analysis of multi-exit networks through the lens of membership leakages.
We propose a hybrid attack that exploits the exit information to improve the performance of existing attacks.
We present a defense mechanism called TimeGuard specifically for multi-exit networks and show that TimeGuard mitigates the newly proposed attacks perfectly.
- Score: 24.993435293184323
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Relying on the fact that not all inputs require the same amount of
computation to yield a confident prediction, multi-exit networks are gaining
attention as a prominent approach for pushing the limits of efficient
deployment. Multi-exit networks endow a backbone model with early exits,
allowing predictions to be obtained at intermediate layers of the model and
thus saving computation time and/or energy. However, existing designs of
multi-exit networks consider only how to achieve the best trade-off between
resource-usage efficiency and prediction accuracy; the privacy risks stemming
from them have never been explored. This prompts the need for a comprehensive
investigation of privacy risks in multi-exit networks.
In this paper, we perform the first privacy analysis of multi-exit networks
through the lens of membership leakages. In particular, we first leverage the
existing attack methodologies to quantify the multi-exit networks'
vulnerability to membership leakages. Our experimental results show that
multi-exit networks are less vulnerable to membership leakages and that the
exits (their number and depth) attached to the backbone model are highly
correlated with the attack performance. Furthermore, we propose a hybrid
attack that exploits the exit information to improve the performance of
existing attacks. We evaluate the membership leakage threat posed by our
hybrid attack under three different adversarial setups, ultimately arriving
at a model-free and data-free adversary. These results clearly demonstrate
that our hybrid attack is broadly applicable, and thus the corresponding
risks are much more severe than those shown by existing membership inference
attacks. We further present a defense mechanism called TimeGuard, designed
specifically for multi-exit networks, and show that TimeGuard mitigates the
newly proposed attacks perfectly.
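To make the setting concrete, the sketch below shows a minimal two-exit network with confidence-threshold early exiting, together with one plausible way an attacker could append the exit index to the usual posterior-based features of a membership inference classifier. The architecture, the 0.9 threshold, and the helper names (TwoExitMLP, early_exit_predict, hybrid_attack_features) are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a multi-exit network and of exit-aware attack features.
# Everything here is an assumption for illustration, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoExitMLP(nn.Module):
    """Hypothetical backbone MLP with one early exit and one final exit."""

    def __init__(self, in_dim=784, hidden=256, num_classes=10):
        super().__init__()
        self.block1 = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.exit1 = nn.Linear(hidden, num_classes)   # early exit head
        self.block2 = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
        self.exit2 = nn.Linear(hidden, num_classes)   # final exit head

    def forward(self, x):
        h1 = self.block1(x)
        h2 = self.block2(h1)
        # Training would typically combine the losses of all exits.
        return self.exit1(h1), self.exit2(h2)

    @torch.no_grad()
    def early_exit_predict(self, x, threshold=0.9):
        """Return (posteriors, exit index) for a single input, stopping at
        the first exit whose top-1 confidence clears the threshold."""
        h = self.block1(x)
        probs = F.softmax(self.exit1(h), dim=-1)
        if probs.max().item() >= threshold:
            return probs, 0                            # exited early
        h = self.block2(h)
        return F.softmax(self.exit2(h), dim=-1), 1     # final exit


def hybrid_attack_features(probs, exit_idx, k=3):
    """Assumed feature vector for a membership classifier: top-k posteriors
    plus the exit index, i.e. the extra signal a hybrid attack could use."""
    topk, _ = probs.topk(k, dim=-1)
    return torch.cat([topk.flatten(), torch.tensor([float(exit_idx)])])


if __name__ == "__main__":
    model = TwoExitMLP().eval()
    x = torch.randn(1, 784)
    probs, exit_idx = model.early_exit_predict(x)
    print(hybrid_attack_features(probs, exit_idx))     # length-4 feature vector
```

The exit index returned by early_exit_predict is input-dependent, which is exactly the kind of side information the hybrid attack described in the abstract exploits on top of the model's posteriors.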
Related papers
- Membership Inference Attacks Against In-Context Learning [26.57639819629732]
We present the first membership inference attack tailored for In-Context Learning (ICL).
We propose four attack strategies tailored to various constrained scenarios.
We investigate three potential defenses targeting data, instruction, and output.
arXiv Detail & Related papers (2024-09-02T17:23:23Z)
- Celtibero: Robust Layered Aggregation for Federated Learning [0.0]
We introduce Celtibero, a novel defense mechanism that integrates layered aggregation to enhance robustness against adversarial manipulation.
We demonstrate that Celtibero consistently achieves high main task accuracy (MTA) while maintaining minimal attack success rates (ASR) across a range of untargeted and targeted poisoning attacks.
arXiv Detail & Related papers (2024-08-26T12:54:00Z)
- PriRoAgg: Achieving Robust Model Aggregation with Minimum Privacy Leakage for Federated Learning [49.916365792036636]
Federated learning (FL) has recently gained significant momentum due to its potential to leverage large-scale distributed user data.
The transmitted model updates can potentially leak sensitive user information, and the lack of central control of the local training process leaves the global model susceptible to malicious manipulations on model updates.
We develop a general framework PriRoAgg, utilizing Lagrange coded computing and distributed zero-knowledge proof, to execute a wide range of robust aggregation algorithms while satisfying aggregated privacy.
arXiv Detail & Related papers (2024-07-12T03:18:08Z)
- Aggressive or Imperceptible, or Both: Network Pruning Assisted Hybrid Byzantines in Federated Learning [6.384138583754105]
Federated learning (FL) has been introduced to enable a large number of clients, possibly mobile devices, to collaborate on generating a generalized machine learning model.
Due to the participation of a large number of clients, it is often difficult to profile and verify each client, which leads to a security threat.
We introduce a hybrid sparse Byzantine attack that is composed of two parts: one exhibiting a sparse nature and attacking only certain NN locations with higher sensitivity, and the other being more silent but accumulating over time.
arXiv Detail & Related papers (2024-04-09T11:42:32Z)
- Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models [112.48136829374741]
In this paper, we unveil a new vulnerability: the privacy backdoor attack.
When a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model.
Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
arXiv Detail & Related papers (2024-04-01T16:50:54Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
MESAS is the first defense robust against strong adaptive adversaries, effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- Backdoor Attacks in Peer-to-Peer Federated Learning [11.235386862864397]
Peer-to-Peer Federated Learning (P2PFL) offers advantages in terms of both privacy and reliability.
We propose new backdoor attacks for P2PFL that leverage structural graph properties to select the malicious nodes, and achieve high attack success.
arXiv Detail & Related papers (2023-01-23T21:49:28Z)
- Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z)
- RelaxLoss: Defending Membership Inference Attacks without Losing Utility [68.48117818874155]
We propose a novel training framework based on a relaxed loss with a more achievable learning target.
RelaxLoss is applicable to any classification model with added benefits of easy implementation and negligible overhead.
Our approach consistently outperforms state-of-the-art defense mechanisms in terms of resilience against MIAs.
arXiv Detail & Related papers (2022-07-12T19:34:47Z)
- On The Vulnerability of Recurrent Neural Networks to Membership Inference Attacks [20.59493611017851]
We study the privacy implications of deploying recurrent neural networks in machine learning.
We consider membership inference attacks (MIAs) in which an attacker aims to infer whether a given data record has been used in the training of a learning agent.
arXiv Detail & Related papers (2021-10-06T20:20:35Z)
- DICE: Diversity in Deep Ensembles via Conditional Redundancy Adversarial Estimation [109.11580756757611]
Deep ensembles perform better than a single network thanks to the diversity among their members.
Recent approaches regularize predictions to increase diversity; however, they also drastically decrease individual members' performances.
We introduce a novel training criterion called DICE: it increases diversity by reducing spurious correlations among features.
arXiv Detail & Related papers (2021-01-14T10:53:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.