On the Evaluation of User Privacy in Deep Neural Networks using Timing
Side Channel
- URL: http://arxiv.org/abs/2208.01113v3
- Date: Sat, 17 Feb 2024 10:43:11 GMT
- Title: On the Evaluation of User Privacy in Deep Neural Networks using Timing
Side Channel
- Authors: Shubhi Shukla, Manaar Alam, Sarani Bhattacharya, Debdeep Mukhopadhyay,
Pabitra Mitra
- Abstract summary: We identify and report a novel data-dependent timing side-channel leakage (termed Class Leakage) in Deep Learning (DL) implementations.
We demonstrate a practical inference-time attack where an adversary with user privilege and hard-label black-box access to an MLaaS can exploit Class Leakage.
We develop an easy-to-implement countermeasure by making a constant-time branching operation that alleviates the Class Leakage.
- Score: 14.350301915592027
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent Deep Learning (DL) advancements in solving complex real-world tasks
have led to its widespread adoption in practical applications. However, this
opportunity comes with significant underlying risks, as many of these models
rely on privacy-sensitive data for training in a variety of applications,
making them an overly-exposed threat surface for privacy violations.
Furthermore, the widespread use of cloud-based Machine-Learning-as-a-Service
(MLaaS) for its robust infrastructure support has broadened the threat surface
to include a variety of remote side-channel attacks. In this paper, we first
identify and report a novel data-dependent timing side-channel leakage (termed
Class Leakage) in DL implementations, originating from a non-constant-time
branching operation in PyTorch, a widely used DL framework. We further
demonstrate a practical inference-time attack where an adversary with user
privilege and hard-label black-box access to an MLaaS can exploit Class Leakage
to compromise the privacy of MLaaS users. DL models are vulnerable to
Membership Inference Attack (MIA), where an adversary's objective is to deduce
whether any particular data has been used while training the model. In this
paper, as a separate case study, we demonstrate that a DL model secured with
differential privacy (a popular countermeasure against MIA) is still vulnerable
to MIA against an adversary exploiting Class Leakage. We develop an
easy-to-implement countermeasure that makes the branching operation
constant-time, alleviating the Class Leakage and also aiding in mitigating MIA.
To validate our approach, we train five state-of-the-art pre-trained DL models
on two standard benchmark image classification datasets, CIFAR-10 and
CIFAR-100, across two computing environments with Intel Xeon and Intel i7
processors.
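The idea can be illustrated with a small Python sketch: an adversary who only sees hard labels times repeated queries and compares per-class latencies, while the countermeasure removes the timing gap by executing the same work on every path. The functions below (leaky_postprocess, constant_time_postprocess, median_latency) are hypothetical stand-ins written purely for illustration; the actual non-constant-time operation identified in the paper lives inside PyTorch itself, and the real attack targets a deployed MLaaS model rather than a local function.

```python
# Minimal, self-contained sketch (not the paper's implementation): a hypothetical
# data-dependent branch in label post-processing vs. a branchless variant that
# performs the same work unconditionally, so latency no longer depends on the class.
import statistics
import time

import torch


def leaky_postprocess(logits: torch.Tensor, sensitive_class: int) -> int:
    """Hypothetical post-processing: extra work runs only when the predicted
    label equals sensitive_class, creating a class-dependent timing difference."""
    pred = int(torch.argmax(logits))
    if pred == sensitive_class:                        # data-dependent branch
        _ = torch.softmax(logits, dim=0).sum().item()  # extra work on this path only
    return pred


def constant_time_postprocess(logits: torch.Tensor, sensitive_class: int) -> int:
    """Branchless counterpart: the extra work is always executed, so the
    measured latency is (ideally) independent of the predicted label."""
    pred = int(torch.argmax(logits))
    _ = torch.softmax(logits, dim=0).sum().item()      # always executed
    return pred


def median_latency(fn, logits, sensitive_class, reps=5000):
    """Median wall-clock latency of fn over repeated hard-label queries."""
    samples = []
    for _ in range(reps):
        start = time.perf_counter()
        fn(logits, sensitive_class)
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)


if __name__ == "__main__":
    torch.manual_seed(0)
    num_classes, sensitive = 10, 3
    hit = torch.zeros(num_classes)
    hit[sensitive] = 5.0                               # argmax == sensitive class
    miss = torch.zeros(num_classes)
    miss[(sensitive + 1) % num_classes] = 5.0          # argmax != sensitive class

    for name, fn in [("leaky", leaky_postprocess),
                     ("constant-time", constant_time_postprocess)]:
        t_hit = median_latency(fn, hit, sensitive)
        t_miss = median_latency(fn, miss, sensitive)
        print(f"{name:>13}: hit={t_hit * 1e6:.2f} us  miss={t_miss * 1e6:.2f} us")
```

At this toy scale, the leaky variant shows a hit/miss gap in median latency because one path does extra work, while the branchless variant does not; this mirrors, in miniature, the Class Leakage signal and the constant-time mitigation evaluated in the paper.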
Related papers
- Transferable Adversarial Attacks on SAM and Its Downstream Models [87.23908485521439]
This paper explores the feasibility of adversarial attacks on various downstream models fine-tuned from the Segment Anything Model (SAM).
To enhance the effectiveness of the adversarial attack towards models fine-tuned on unknown datasets, we propose a universal meta-initialization (UMI) algorithm.
arXiv Detail & Related papers (2024-10-26T15:04:04Z) - A Method to Facilitate Membership Inference Attacks in Deep Learning Models [5.724311218570013]
We demonstrate a new form of membership inference attack that is strictly more powerful than prior art.
Our attack empowers the adversary to reliably de-identify all the training samples.
We show that the models can effectively disguise the amplified membership leakage under common membership privacy auditing.
arXiv Detail & Related papers (2024-07-02T03:33:42Z) - Dullahan: Stealthy Backdoor Attack against Without-Label-Sharing Split Learning [29.842087372804905]
We propose SBAT, a stealthy backdoor attack strategy tailored to the without-label-sharing split learning architecture.
SBAT achieves a higher level of attack stealthiness by refraining from modifying any intermediate parameters during training.
arXiv Detail & Related papers (2024-05-21T13:03:06Z) - Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models [112.48136829374741]
In this paper, we unveil a new vulnerability: the privacy backdoor attack.
When a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model.
Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
arXiv Detail & Related papers (2024-04-01T16:50:54Z) - A Robust Adversary Detection-Deactivation Method for Metaverse-oriented
Collaborative Deep Learning [13.131323206843733]
This paper proposes an adversary detection-deactivation method that can limit and isolate the access of potentially malicious participants.
A detailed protection analysis has been conducted on a Multiview CDL case, and results show that the protocol can effectively prevent harmful access by manner analysis.
arXiv Detail & Related papers (2023-10-21T06:45:18Z) - Avoid Adversarial Adaption in Federated Learning by Multi-Metric
Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
The proposed MESAS is the first defense robust against such strong adaptive adversaries, effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z) - Enhancing Multiple Reliability Measures via Nuisance-extended
Information Bottleneck [77.37409441129995]
In practical scenarios where training data is limited, many predictive signals in the data can come from biases in data acquisition rather than from the task itself.
We consider an adversarial threat model under a mutual information constraint to cover a wider class of perturbations in training.
We propose an autoencoder-based training to implement the objective, as well as practical encoder designs to facilitate the proposed hybrid discriminative-generative training.
arXiv Detail & Related papers (2023-03-24T16:03:21Z) - RelaxLoss: Defending Membership Inference Attacks without Losing Utility [68.48117818874155]
We propose a novel training framework based on a relaxed loss with a more achievable learning target.
RelaxLoss is applicable to any classification model with added benefits of easy implementation and negligible overhead.
Our approach consistently outperforms state-of-the-art defense mechanisms in terms of resilience against MIAs.
arXiv Detail & Related papers (2022-07-12T19:34:47Z) - Sampling Attacks: Amplification of Membership Inference Attacks by
Repeated Queries [74.59376038272661]
We introduce the sampling attack, a novel membership inference technique that, unlike other standard membership adversaries, works under the severe restriction of having no access to the victim model's scores.
We show that a victim model that only publishes the labels is still susceptible to sampling attacks and the adversary can recover up to 100% of its performance.
For defense, we choose differential privacy in the form of gradient perturbation during the training of the victim model as well as output perturbation at prediction time.
arXiv Detail & Related papers (2020-09-01T12:54:54Z) - DeepMAL -- Deep Learning Models for Malware Traffic Detection and
Classification [4.187494796512101]
We introduce DeepMAL, a DL model which is able to capture the underlying statistics of malicious traffic.
We show that DeepMAL can detect and classify malware flows with high accuracy, outperforming traditional shallow models.
arXiv Detail & Related papers (2020-03-03T16:54:26Z)