PATROL: Privacy-Oriented Pruning for Collaborative Inference Against
Model Inversion Attacks
- URL: http://arxiv.org/abs/2307.10981v2
- Date: Mon, 13 Nov 2023 04:40:17 GMT
- Title: PATROL: Privacy-Oriented Pruning for Collaborative Inference Against
Model Inversion Attacks
- Authors: Shiwei Ding, Lan Zhang, Miao Pan, Xiaoyong Yuan
- Abstract summary: Collaborative inference is a promising solution to enable resource-constrained edge devices to perform inference using state-of-the-art deep neural networks (DNNs).
Recent research indicates that model inversion attacks (MIAs) can reconstruct input data from intermediate results, posing serious privacy concerns for collaborative inference.
This paper provides a viable solution, named PATROL, which develops privacy-oriented pruning to balance privacy, efficiency, and utility of collaborative inference.
- Score: 15.257413246220032
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Collaborative inference has been a promising solution to enable
resource-constrained edge devices to perform inference using state-of-the-art
deep neural networks (DNNs). In collaborative inference, the edge device first
feeds the input to a partial DNN locally and then uploads the intermediate
result to the cloud to complete the inference. However, recent research
indicates model inversion attacks (MIAs) can reconstruct input data from
intermediate results, posing serious privacy concerns for collaborative
inference. Existing perturbation and cryptography techniques are inefficient
and unreliable in defending against MIAs while performing accurate inference.
This paper provides a viable solution, named PATROL, which develops
privacy-oriented pruning to balance privacy, efficiency, and utility of
collaborative inference. PATROL takes advantage of the fact that later layers
in a DNN can extract more task-specific features. Given limited local resources
for collaborative inference, PATROL intends to deploy more layers at the edge
based on pruning techniques to enforce task-specific features for inference and
reduce task-irrelevant but sensitive features for privacy preservation. To
achieve privacy-oriented pruning, PATROL introduces two key components:
Lipschitz regularization and adversarial reconstruction training, which
increase the reconstruction errors by reducing the stability of MIAs and
enhance the target inference model by adversarial training, respectively. On a
real-world collaborative inference task, vehicle re-identification, we
demonstrate the superior performance of PATROL in defending against MIAs.
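The following minimal sketch illustrates how the two components could be combined in training; the network architectures, pruning ratio, loss weights, and the simulated inversion decoder are illustrative assumptions rather than the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Illustrative split model: edge_net runs on the device, cloud_net in the cloud.
edge_net = nn.Sequential(
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
cloud_net = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 10))

# Simulated inversion attacker used for adversarial reconstruction training:
# it tries to rebuild the input image from the intermediate features.
decoder = nn.Sequential(
    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1))

# Structured pruning so that more (sparser) layers fit within the edge budget.
for m in edge_net.modules():
    if isinstance(m, nn.Conv2d):
        prune.ln_structured(m, name="weight", amount=0.5, n=2, dim=0)

def lipschitz_penalty(net):
    # Sum of spectral norms of edge-layer weights; a common surrogate for a
    # Lipschitz-style regularizer (assumed form, not necessarily PATROL's).
    total = 0.0
    for m in net.modules():
        if isinstance(m, nn.Conv2d):
            w = m.weight.reshape(m.weight.shape[0], -1)
            total = total + torch.linalg.matrix_norm(w, ord=2)
    return total

ce_loss, mse_loss = nn.CrossEntropyLoss(), nn.MSELoss()
opt_model = torch.optim.Adam(
    list(edge_net.parameters()) + list(cloud_net.parameters()), lr=1e-3)
opt_dec = torch.optim.Adam(decoder.parameters(), lr=1e-3)

def train_step(x, y, lam_lip=1e-3, lam_adv=1e-1):
    # 1) Train the simulated attacker to reconstruct x from detached features.
    z = edge_net(x).detach()
    opt_dec.zero_grad()
    mse_loss(decoder(z), x).backward()
    opt_dec.step()

    # 2) Train the target model: accurate inference, large reconstruction
    #    error for the attacker, small Lipschitz penalty on the edge layers.
    opt_model.zero_grad()
    z = edge_net(x)
    loss = (ce_loss(cloud_net(z), y)
            - lam_adv * mse_loss(decoder(z), x)
            + lam_lip * lipschitz_penalty(edge_net))
    loss.backward()
    opt_model.step()
    return loss.item()
```

At deployment, only the pruned edge_net would run on the device and upload the feature tensor z to the cloud model.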
Related papers
- Prompt Inversion Attack against Collaborative Inference of Large Language Models [14.786666134508645]
We introduce the concept of a prompt inversion attack (PIA), where a malicious participant aims to recover the input prompt from the activations transmitted by the preceding participant.
Our method achieves an 88.4% token accuracy on the Skytrax dataset with the Llama-65B model when inverting the maximum number of transformer layers.
arXiv Detail & Related papers (2025-03-12T03:20:03Z) - Theoretical Insights in Model Inversion Robustness and Conditional Entropy Maximization for Collaborative Inference Systems [89.35169042718739]
Collaborative inference enables end users to leverage powerful deep learning models without exposing sensitive raw data to cloud servers.
Recent studies have revealed that these intermediate features may not sufficiently preserve privacy, as information can be leaked and raw data can be reconstructed via model inversion attacks (MIAs).
This work first theoretically proves that the conditional entropy of inputs given intermediate features provides a guaranteed lower bound on the reconstruction mean square error (MSE) under any MIA.
Then, we derive a differentiable and solvable measure for bounding this conditional entropy based on Gaussian mixture estimation and propose a conditional entropy maximization algorithm to enhance inversion robustness.
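As a sketch of the kind of guarantee referred to here (the standard entropy-based estimation bound, stated under general assumptions and not necessarily in the cited paper's exact form): for a d-dimensional input X, transmitted feature Z, and any reconstruction produced by an attacker,

```latex
% Entropy-based lower bound on reconstruction error (standard form, assumed):
% for X \in \mathbb{R}^d, feature Z, and any estimator \hat{X}(Z),
\mathbb{E}\big[\lVert X - \hat{X}(Z) \rVert^2\big]
  \;\ge\; \frac{d}{2\pi e}\,\exp\!\Big(\tfrac{2}{d}\, h(X \mid Z)\Big)
```

so raising the conditional differential entropy h(X | Z) directly raises the reconstruction-MSE floor for any inversion attack.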
arXiv Detail & Related papers (2025-03-01T07:15:21Z) - How Breakable Is Privacy: Probing and Resisting Model Inversion Attacks in Collaborative Inference [9.092229145160763]
Collaborative inference improves computational efficiency for edge devices by transmitting intermediate features to cloud models.
There is no established criterion for assessing the difficulty of model inversion attacks (MIAs) in collaborative inference (CI).
We propose the first theoretical criterion to assess MIA difficulty in CI, identifying mutual information, entropy, and effective information volume as key influencing factors.
arXiv Detail & Related papers (2025-01-01T13:00:01Z) - Edge-Only Universal Adversarial Attacks in Distributed Learning [49.546479320670464]
In this work, we explore the feasibility of generating universal adversarial attacks when an attacker has access to the edge part of the model only.
Our approach shows that adversaries can induce effective mispredictions in the unknown cloud part by leveraging key features on the edge side.
Our results on ImageNet demonstrate strong attack transferability to the unknown cloud part.
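A minimal sketch of how such an edge-only universal perturbation might be crafted; the feature-distortion objective, input shape, and hyperparameters are assumptions for illustration, not the cited paper's exact attack.

```python
import torch

def edge_only_uap(edge_net, loader, eps=8 / 255, epochs=5, lr=0.01):
    # Craft one input-agnostic perturbation using only the edge sub-model.
    # Surrogate objective: push perturbed edge features far from clean ones,
    # hoping the distortion propagates to mispredictions in the unseen cloud part.
    delta = torch.zeros(1, 3, 224, 224, requires_grad=True)  # assumed input shape
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(epochs):
        for x, _ in loader:
            clean = edge_net(x).detach()
            adv = edge_net((x + delta).clamp(0, 1))
            loss = -torch.norm(adv - clean)        # maximize feature distortion
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)            # keep the perturbation small
    return delta.detach()
```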
arXiv Detail & Related papers (2024-11-15T11:06:24Z) - Efficient Privacy-Preserving KAN Inference Using Homomorphic Encryption [9.0993556073886]
Homomorphic encryption (HE) facilitates privacy-preserving inference for deep learning models.
The complex structure of Kolmogorov-Arnold Networks (KANs), incorporating nonlinear elements like the SiLU activation function and B-spline functions, renders existing privacy-preserving inference techniques inadequate.
We propose an accurate and efficient privacy-preserving inference scheme tailored for KANs.
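As context for why HE makes such activations awkward, the sketch below shows the generic workaround of replacing a non-polynomial activation with a low-degree polynomial fit; the degree, interval, and the use of a polynomial approximation at all are illustrative assumptions, not necessarily the cited scheme's approach.

```python
import numpy as np

# HE schemes evaluate only additions and multiplications, so non-polynomial
# activations such as SiLU are often replaced by low-degree polynomial fits.
xs = np.linspace(-6.0, 6.0, 2001)
silu = xs / (1.0 + np.exp(-xs))                 # SiLU(x) = x * sigmoid(x)
coeffs = np.polyfit(xs, silu, deg=7)            # least-squares polynomial fit
approx = np.polyval(coeffs, xs)
print("max |SiLU - poly| on [-6, 6]:", np.abs(silu - approx).max())
```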
arXiv Detail & Related papers (2024-09-12T04:51:27Z) - Approximating Two-Layer ReLU Networks for Hidden State Analysis in Differential Privacy [3.8254443661593633]
We show that it is possible to privately train convex problems with privacy-utility trade-offs comparable to those of one-hidden-layer ReLU networks trained with DP-SGD.
Our experiments on benchmark classification tasks show that NoisyCGD can achieve privacy-utility trade-offs comparable to DP-SGD applied to one-hidden-layer ReLU networks.
arXiv Detail & Related papers (2024-07-05T22:43:32Z) - HUWSOD: Holistic Self-training for Unified Weakly Supervised Object Detection [66.42229859018775]
We introduce a unified, high-capacity weakly supervised object detection (WSOD) network called HUWSOD.
HUWSOD incorporates a self-supervised proposal generator and an autoencoder proposal generator with a multi-rate re-supervised pyramid to replace traditional object proposals.
Our findings indicate that random boxes, although significantly different from well-designed offline object proposals, are effective for WSOD training.
arXiv Detail & Related papers (2024-06-27T17:59:49Z) - Small Object Detection via Coarse-to-fine Proposal Generation and
Imitation Learning [52.06176253457522]
We propose a two-stage framework tailored for small object detection based on the Coarse-to-fine pipeline and Feature Imitation learning.
CFINet achieves state-of-the-art performance on the large-scale small object detection benchmarks, SODA-D and SODA-A.
arXiv Detail & Related papers (2023-08-18T13:13:09Z) - Grad-FEC: Unequal Loss Protection of Deep Features in Collaborative
Intelligence [27.135997578218486]
Collaborative intelligence (CI) involves dividing an artificial intelligence (AI) model into two parts: a front-end, deployed on an edge device, and a back-end, deployed in the cloud.
The deep feature tensors produced by the front-end are transmitted to the cloud through a communication channel, which may be subject to packet loss.
We propose a novel approach to enhance the resilience of the CI system in the presence of packet loss through Unequal Loss Protection (ULP).
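A toy sketch of the unequal-protection idea: more important feature chunks receive more parity (redundancy) packets. The importance measure and the integer allocation rule here are illustrative assumptions, not Grad-FEC's actual design.

```python
import numpy as np

def allocate_parity(importance, total_parity):
    # Distribute parity packets across feature chunks in proportion to their
    # importance, so packet losses hit the most critical features least hard.
    w = importance / importance.sum()
    alloc = np.floor(w * total_parity).astype(int)
    leftover = total_parity - alloc.sum()
    for i in np.argsort(-w)[:leftover]:        # top up the most important chunks
        alloc[i] += 1
    return alloc

# Usage: chunk importance could come from feature magnitudes or gradients.
importance = np.array([0.5, 3.0, 1.5, 0.2])
print(allocate_parity(importance, total_parity=8))   # -> [0 5 3 0]
```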
arXiv Detail & Related papers (2023-07-04T17:49:46Z) - Secure Split Learning against Property Inference, Data Reconstruction,
and Feature Space Hijacking Attacks [5.209316363034367]
Split learning of deep neural networks (SplitNN) provides a promising solution for a guest and a host to learn jointly for their mutual interest.
However, SplitNN creates a new attack surface for an adversarial participant, holding back its practical use in the real world.
This paper investigates the adversarial effects of highly threatening attacks, including property inference, data reconstruction, and feature hijacking attacks.
We propose a new activation function named R3eLU, transferring private smashed data and partial loss into randomized responses.
arXiv Detail & Related papers (2023-04-19T09:08:23Z) - MAPS: A Noise-Robust Progressive Learning Approach for Source-Free
Domain Adaptive Keypoint Detection [76.97324120775475]
Cross-domain keypoint detection methods always require accessing the source data during adaptation.
This paper considers source-free domain adaptive keypoint detection, where only the well-trained source model is provided to the target domain.
arXiv Detail & Related papers (2023-02-09T12:06:08Z) - Over-the-Air Federated Learning with Privacy Protection via Correlated
Additive Perturbations [57.20885629270732]
We consider privacy aspects of wireless federated learning with Over-the-Air (OtA) transmission of gradient updates from multiple users/agents to an edge server.
Traditional perturbation-based methods provide privacy protection while sacrificing training accuracy.
In this work, we aim at minimizing privacy leakage to the adversary and the degradation of model accuracy at the edge server.
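One simple way to realize correlated additive perturbations is zero-sum noise across users, so individual updates are masked while the aggregate is unchanged; the construction below is a plain sketch and ignores the over-the-air channel modeling in the cited work.

```python
import numpy as np

def correlated_perturbations(num_users, dim, scale=1.0, seed=0):
    # Zero-sum correlated noise: each user's transmitted update is masked,
    # but the noise cancels when the server sums over users.
    rng = np.random.default_rng(seed)
    s = rng.normal(0.0, scale, size=(num_users, dim))
    return s - s.mean(axis=0, keepdims=True)    # per-dimension sum over users is 0

g = np.random.randn(5, 10)                      # local gradient updates (toy data)
n = correlated_perturbations(5, 10, scale=5.0)
masked = g + n                                  # what each user would transmit
assert np.allclose(masked.sum(axis=0), g.sum(axis=0))   # aggregate is preserved
```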
arXiv Detail & Related papers (2022-10-05T13:13:35Z) - Federated Learning with Unreliable Clients: Performance Analysis and
Mechanism Design [76.29738151117583]
Federated Learning (FL) has become a promising tool for training effective machine learning models among distributed clients.
However, low-quality models could be uploaded to the aggregator server by unreliable clients, leading to a degradation or even a collapse of training.
We model these unreliable behaviors of clients and propose a defensive mechanism to mitigate such a security risk.
arXiv Detail & Related papers (2021-05-10T08:02:27Z) - Privacy for Rescue: A New Testimony Why Privacy is Vulnerable In Deep
Models [6.902994369582068]
We present a formal definition of the privacy protection problem in the edge-cloud system running models.
We analyze the state-of-the-art methods and point out their drawbacks.
We propose two new metrics that are more accurate to measure the effectiveness of privacy protection methods.
arXiv Detail & Related papers (2019-12-31T15:55:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.