How Breakable Is Privacy: Probing and Resisting Model Inversion Attacks in Collaborative Inference
- URL: http://arxiv.org/abs/2501.00824v4
- Date: Fri, 18 Apr 2025 02:06:06 GMT
- Title: How Breakable Is Privacy: Probing and Resisting Model Inversion Attacks in Collaborative Inference
- Authors: Rongke Liu
- Abstract summary: Collaborative inference improves computational efficiency for edge devices by transmitting intermediate features to cloud models. However, there is no established criterion for assessing the difficulty of model inversion attacks (MIAs). We propose the first theoretical criterion to assess MIA difficulty in CI, identifying mutual information, entropy, and effective information volume as key influencing factors.
- Score: 9.092229145160763
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Collaborative inference (CI) improves computational efficiency for edge devices by transmitting intermediate features to cloud models. However, this process inevitably exposes feature representations to model inversion attacks (MIAs), enabling unauthorized data reconstruction. Despite extensive research, there is no established criterion for assessing the difficulty of MIA implementation, leaving a fundamental question unanswered: \textit{What factors truly and verifiably determine the attack's success in CI?} Moreover, existing defenses lack such a theoretical foundation, making it challenging to regulate feature information effectively while ensuring privacy and minimizing computational overhead. These shortcomings introduce three key challenges: a theoretical gap, a methodological limitation, and a practical constraint. To overcome these challenges, we propose the first theoretical criterion for assessing MIA difficulty in CI, identifying mutual information, entropy, and effective information volume as key influencing factors. The validity of this criterion is demonstrated using a mutual information neural estimator. Building on this insight, we propose SiftFunnel, a privacy-preserving framework that resists MIAs while maintaining usability. Specifically, we incorporate linear and non-linear correlation constraints alongside label smoothing to suppress redundant information transmission, effectively balancing privacy and usability. To enhance deployability, the edge model adopts a funnel-shaped structure with attention mechanisms, strengthening privacy while reducing computational and storage burdens. Experiments show that, compared to state-of-the-art defenses, SiftFunnel increases reconstruction error by $\sim$30\%, lowers mutual and effective information metrics by $\geq$50\%, and reduces edge burdens by almost $20\times$, while maintaining comparable usability.
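The criterion's validation hinges on estimating the mutual information between inputs and the transmitted features. Below is a minimal sketch of one common way to do this, a MINE-style estimator built on the Donsker-Varadhan bound; the network sizes, feature dimensions, and training loop are illustrative assumptions, not the paper's exact setup.

```python
import math
import torch
import torch.nn as nn

class MINE(nn.Module):
    """Statistics network T(x, z) for the Donsker-Varadhan bound:
    I(X; Z) >= E_p(x,z)[T] - log E_p(x)p(z)[exp(T)]."""
    def __init__(self, x_dim, z_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, z):
        return self.net(torch.cat([x, z], dim=1))

def mi_lower_bound(t_net, x, z):
    """Estimate a lower bound on I(X; Z) from a mini-batch.
    Marginal samples come from shuffling z within the batch."""
    joint = t_net(x, z).mean()
    z_shuffled = z[torch.randperm(z.size(0))]
    marginal = torch.logsumexp(t_net(x, z_shuffled).squeeze(), dim=0) \
        - math.log(z.size(0))
    return joint - marginal

# Illustrative usage: x is a flattened input, z the edge model's feature.
x_dim, z_dim = 784, 128          # assumed dimensions
t_net = MINE(x_dim, z_dim)
opt = torch.optim.Adam(t_net.parameters(), lr=1e-4)
x = torch.randn(64, x_dim)       # stand-in for real inputs
z = torch.randn(64, z_dim)       # stand-in for intermediate features
for _ in range(10):              # short demo loop
    loss = -mi_lower_bound(t_net, x, z)   # maximize the bound
    opt.zero_grad(); loss.backward(); opt.step()
```

A higher converged bound indicates more input information surviving in the feature, i.e., an easier target for inversion.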
Related papers
- Theoretical Insights in Model Inversion Robustness and Conditional Entropy Maximization for Collaborative Inference Systems [89.35169042718739]
Collaborative inference enables end users to leverage powerful deep learning models without exposing sensitive raw data to cloud servers.
Recent studies have revealed that these intermediate features may not sufficiently preserve privacy, as information can be leaked and raw data can be reconstructed via model inversion attacks (MIAs).
This work first theoretically proves that the conditional entropy of inputs given intermediate features provides a guaranteed lower bound on the reconstruction mean square error (MSE) under any MIA.
Then, we derive a differentiable and solvable measure for bounding this conditional entropy based on Gaussian mixture estimation and propose a conditional entropy maximization algorithm to enhance inversion robustness.
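For reference, the generic form such an entropy-to-MSE bound takes (hedged: this is a standard information-theoretic inequality, not necessarily the paper's exact statement) is, for any estimator $\hat{X}(Z)$ of a $d$-dimensional input $X$:

$$\frac{1}{d}\,\mathbb{E}\left[\|X - \hat{X}(Z)\|_2^2\right] \;\geq\; \frac{1}{2\pi e}\, e^{\frac{2}{d}\, h(X \mid Z)},$$

so raising the conditional entropy $h(X \mid Z)$ exponentially raises the floor on the reconstruction MSE achievable by any inversion attack.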
arXiv Detail & Related papers (2025-03-01T07:15:21Z) - Preventing Non-intrusive Load Monitoring Privacy Invasion: A Precise Adversarial Attack Scheme for Networked Smart Meters [99.90150979732641]
We propose an innovative scheme based on adversarial attacks in this paper. The scheme effectively prevents NILM models from violating appliance-level privacy, while also ensuring accurate billing calculation for users. Our solutions exhibit transferability, making the perturbation signal generated from one target model applicable to other diverse NILM models.
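As a rough illustration of the idea (a hedged sketch, not the paper's scheme: the model, loss, and zero-sum projection are illustrative assumptions), one can craft a perturbation that degrades appliance-level disaggregation while leaving the aggregate reading, and hence the bill, essentially unchanged:

```python
import torch
import torch.nn as nn

def billing_preserving_perturbation(nilm_model, aggregate, target,
                                    eps=0.05, steps=20, lr=0.01):
    """PGD-style perturbation of a smart-meter window. Projecting delta
    onto a zero-mean subspace keeps the summed reading (billing) intact."""
    delta = torch.zeros_like(aggregate, requires_grad=True)
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        pred = nilm_model(aggregate + delta)
        loss = loss_fn(pred, target)              # push predictions off the truth
        loss.backward()
        with torch.no_grad():
            delta += lr * delta.grad.sign()       # ascend the disaggregation loss
            delta.clamp_(-eps, eps)               # keep the perturbation small
            delta -= delta.mean(dim=-1, keepdim=True)  # zero-sum projection
            # (projection may slightly exceed eps; acceptable for a sketch)
            delta.grad.zero_()
    return delta.detach()

# Illustrative usage with a stand-in NILM model.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 128))
aggregate = torch.rand(1, 128)     # aggregate load window
target = torch.rand(1, 128)        # appliance-level ground truth
delta = billing_preserving_perturbation(model, aggregate, target)
print("billing drift:", float(delta.sum()))  # ~0 after the zero-mean projection
```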
arXiv Detail & Related papers (2024-12-22T07:06:46Z) - SUMI-IFL: An Information-Theoretic Framework for Image Forgery Localization with Sufficiency and Minimality Constraints [66.85363924364628]
Image forgery localization (IFL) is a crucial technique for preventing the misuse of tampered images and protecting social safety. We introduce a novel information-theoretic IFL framework named SUMI-IFL that imposes sufficiency-view and minimality-view constraints on forgery feature representations.
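In information-bottleneck terms (a generic sketch of what sufficiency and minimality constraints typically look like, not necessarily SUMI-IFL's exact objective), the forgery representation $F$ should keep what predicts the forgery mask $M$ and discard the rest of the image $X$:

$$\max_{F}\; I(F; M) \;-\; \beta\, I(F; X), \qquad \beta > 0,$$

where the first term enforces sufficiency for localization and the second penalizes redundant image information, with $\beta$ trading the two off.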
arXiv Detail & Related papers (2024-12-13T09:08:02Z) - Fed-Credit: Robust Federated Learning with Credibility Management [18.349127735378048]
Federated Learning (FL) is an emerging machine learning approach enabling model training on decentralized devices or data sources.
We propose a robust FL approach based on a credibility management scheme, called Fed-Credit.
The results exhibit superior accuracy and resilience against adversarial attacks, all while maintaining comparatively low computational complexity.
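A hedged sketch of credibility-weighted aggregation (illustrative only; Fed-Credit's actual credibility update is more involved): each client's update is scored by its agreement with a robust consensus, and the scores weight the average.

```python
import numpy as np

def credibility_weighted_average(updates, temperature=5.0):
    """Aggregate client model updates, down-weighting outliers.
    updates: array of shape (num_clients, num_params)."""
    median = np.median(updates, axis=0)             # robust reference point
    dists = np.linalg.norm(updates - median, axis=1)
    # Larger distance -> lower credibility (softmax over negative distance).
    scores = np.exp(-temperature * dists / (dists.mean() + 1e-12))
    weights = scores / scores.sum()
    return weights @ updates, weights

# Illustrative usage: 4 honest clients plus 1 attacker sending a scaled update.
rng = np.random.default_rng(0)
honest = rng.normal(0.0, 0.1, size=(4, 10))
attacker = 10.0 * rng.normal(size=(1, 10))
agg, w = credibility_weighted_average(np.vstack([honest, attacker]))
print("weights:", np.round(w, 3))   # the attacker receives near-zero weight
```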
arXiv Detail & Related papers (2024-05-20T03:35:13Z) - Center-Based Relaxed Learning Against Membership Inference Attacks [3.301728339780329]
We propose a new architecture-agnostic training paradigm called center-based relaxed learning (CRL).
CRL is adaptive to any classification model and provides privacy preservation with minimal or no loss of model generalizability.
We empirically show that this approach exhibits comparable performance without requiring additional model capacity or data costs.
arXiv Detail & Related papers (2024-04-26T19:41:08Z) - Enhancing Security in Federated Learning through Adaptive Consensus-Based Model Update Validation [2.28438857884398]
This paper introduces an advanced approach for fortifying Federated Learning (FL) systems against label-flipping attacks.
We propose a consensus-based verification process integrated with an adaptive thresholding mechanism.
Our results indicate a significant mitigation of label-flipping attacks, bolstering the FL system's resilience.
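A minimal sketch of the idea (assumed details: cosine similarity as the agreement measure and a mean-minus-k-sigma adaptive threshold; the paper's exact mechanism may differ):

```python
import numpy as np

def validate_updates(updates, k=1.0):
    """Accept a client update only if its agreement with the consensus
    clears a threshold that adapts to the current round.
    updates: (num_clients, num_params) array of proposed updates."""
    consensus = updates.mean(axis=0)
    sims = np.array([
        u @ consensus / (np.linalg.norm(u) * np.linalg.norm(consensus) + 1e-12)
        for u in updates
    ])
    threshold = sims.mean() - k * sims.std()   # adaptive per-round threshold
    accepted = sims >= threshold
    return accepted, sims, threshold

# Illustrative usage: a label-flipping attacker pushes in the opposite direction.
rng = np.random.default_rng(1)
honest = rng.normal(1.0, 0.1, size=(5, 20))
flipped = -honest[:1]                           # adversarial update
accepted, sims, thr = validate_updates(np.vstack([honest, flipped]))
print(accepted)   # the attacker's update fails the consensus check
```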
arXiv Detail & Related papers (2024-03-05T20:54:56Z) - Privacy-preserving Federated Primal-dual Learning for Non-convex and Non-smooth Problems with Model Sparsification [51.04894019092156]
Federated learning (FL) has been recognized as a rapidly growing research area, where the model is trained over distributed clients under the orchestration of a parameter server (PS).
In this paper, we propose a novel privacy-preserving federated primal-dual algorithm with model sparsification for non-convex and non-smooth FL problems.
Its unique properties and corresponding analyses are also presented.
arXiv Detail & Related papers (2023-10-30T14:15:47Z) - Distributional Shift-Aware Off-Policy Interval Estimation: A Unified Error Quantification Framework [8.572441599469597]
We study high-confidence off-policy evaluation in the context of infinite-horizon Markov decision processes.
The objective is to establish a confidence interval (CI) for the target policy value using only offline data pre-collected from unknown behavior policies.
We show that our algorithm is sample-efficient, error-robust, and provably convergent even in non-linear function approximation settings.
arXiv Detail & Related papers (2023-09-23T06:35:44Z) - Doubly Robust Instance-Reweighted Adversarial Training [107.40683655362285]
We propose a novel doubly robust, instance-reweighted adversarial training framework.
Our importance weights are obtained by optimizing the KL-divergence regularized loss function.
Our proposed approach outperforms related state-of-the-art baseline methods in terms of average robust performance.
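For intuition, the closed form behind this kind of objective (a standard derivation, hedged as the generic form rather than the paper's exact formulation): maximizing a KL-regularized weighted loss over the probability simplex yields softmax weights,

$$\max_{w \in \Delta_n} \sum_{i=1}^{n} w_i \ell_i - \tau\, \mathrm{KL}\left(w \,\middle\|\, \tfrac{1}{n}\mathbf{1}\right) \;\Longrightarrow\; w_i^{\star} = \frac{e^{\ell_i / \tau}}{\sum_{j=1}^{n} e^{\ell_j / \tau}},$$

so harder instances (larger robust loss $\ell_i$) receive larger weights, with the temperature $\tau$ controlling how sharply the reweighting concentrates.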
arXiv Detail & Related papers (2023-08-01T06:16:18Z) - PATROL: Privacy-Oriented Pruning for Collaborative Inference Against Model Inversion Attacks [15.257413246220032]
Collaborative inference is a promising solution to enable resource-constrained edge devices to perform inference using state-of-the-art deep neural networks (DNNs).
Recent research indicates model inversion attacks (MIAs) can reconstruct input data from intermediate results, posing serious privacy concerns for collaborative inference.
This paper provides a viable solution, named PATROL, which develops privacy-oriented pruning to balance privacy, efficiency, and utility of collaborative inference.
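As a rough proxy for the idea (hedged: plain magnitude pruning of the edge-side layers, not PATROL's actual privacy-oriented criterion), pruning shrinks the edge model's capacity to carry input-revealing detail:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Stand-in edge model; the deployed split point is an assumption here.
edge_model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
)

# Remove the 50% smallest-magnitude weights in each conv layer.
for module in edge_model.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")   # make the pruning permanent

total = sum(p.numel() for p in edge_model.parameters())
zeros = sum(int((p == 0).sum()) for p in edge_model.parameters())
print(f"sparsity: {zeros / total:.1%}")  # roughly half the weights removed
```

PATROL additionally tunes which parameters to prune so that task-relevant information is kept while inversion-relevant detail is dropped; the snippet above only shows the mechanical pruning step.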
arXiv Detail & Related papers (2023-07-20T16:09:07Z) - Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Empirical results on multiple datasets offer compelling support for our theoretical assertions.
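A hedged sketch of counterfactual inference via quantile regression (illustrative assumptions: scikit-learn's gradient boosting with pinball loss, a scalar covariate, and a grid search for the matching quantile; the paper's neural formulation differs):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=(500, 1))           # observed covariate/treatment
y = 2.0 * x[:, 0] + rng.normal(0, 0.3, 500)    # outcome with exogenous noise

# Fit a family of conditional quantile models f_tau(x).
quantiles = np.linspace(0.05, 0.95, 19)
models = {
    q: GradientBoostingRegressor(loss="quantile", alpha=q).fit(x, y)
    for q in quantiles
}

def counterfactual(x_obs, y_obs, x_new):
    """Pick the quantile whose prediction matches the observed outcome
    (a stand-in for abducting the exogenous noise), then re-evaluate
    that same quantile under the intervention x_new."""
    preds = np.array([models[q].predict([[x_obs]])[0] for q in quantiles])
    tau = quantiles[np.argmin(np.abs(preds - y_obs))]
    return models[tau].predict([[x_new]])[0]

# A unit whose outcome sat high given x=0.2 stays relatively high at x=0.8.
print(counterfactual(x_obs=0.2, y_obs=0.9, x_new=0.8))
```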
arXiv Detail & Related papers (2023-06-09T08:30:51Z) - FeDiSa: A Semi-asynchronous Federated Learning Framework for Power System Fault and Cyberattack Discrimination [1.0621485365427565]
This paper proposes FeDiSa, a novel semi-asynchronous federated learning framework for power system fault and cyberattack discrimination.
Experiments on the proposed framework using publicly available industrial control systems datasets reveal superior attack detection accuracy whilst preserving data confidentiality and minimizing the adverse effects of communication latency and stragglers.
arXiv Detail & Related papers (2023-03-28T13:34:38Z) - FedDef: Defense Against Gradient Leakage in Federated Learning-based
Network Intrusion Detection Systems [15.39058389031301]
We propose two privacy evaluation metrics designed for FL-based NIDSs.
We propose FedDef, a novel optimization-based input perturbation defense strategy with theoretical guarantee.
We experimentally evaluate four existing defenses on four datasets and show that our defense outperforms all the baselines in terms of privacy protection.
arXiv Detail & Related papers (2022-10-08T15:23:30Z) - Over-the-Air Federated Learning with Privacy Protection via Correlated
Additive Perturbations [57.20885629270732]
We consider privacy aspects of wireless federated learning with Over-the-Air (OtA) transmission of gradient updates from multiple users/agents to an edge server.
Traditional perturbation-based methods provide privacy protection while sacrificing training accuracy.
In this work, we aim at minimizing privacy leakage to the adversary and the degradation of model accuracy at the edge server.
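The core trick can be sketched in a few lines (hedged: a toy zero-sum noise construction under an honest-aggregator assumption, not the paper's full over-the-air scheme): perturbations are correlated across users so they cancel in the aggregate.

```python
import numpy as np

def correlated_perturbations(num_users, dim, scale=1.0, rng=None):
    """Draw per-user noise vectors that sum exactly to zero.
    Each user's individual update is masked, but the server's
    aggregate (the sum over users) is unaffected."""
    rng = rng or np.random.default_rng()
    noise = rng.normal(0.0, scale, size=(num_users, dim))
    return noise - noise.mean(axis=0, keepdims=True)   # enforce zero sum

rng = np.random.default_rng(0)
grads = rng.normal(size=(8, 16))                  # true per-user gradients
noise = correlated_perturbations(8, 16, scale=5.0, rng=rng)
sent = grads + noise                              # what each user transmits
print(np.allclose(sent.sum(axis=0), grads.sum(axis=0)))  # True: aggregate intact
```

Because each transmitted update is buried in large noise, an eavesdropper learns little about any single user, while model accuracy at the server is preserved.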
arXiv Detail & Related papers (2022-10-05T13:13:35Z) - Toward Certified Robustness Against Real-World Distribution Shifts [65.66374339500025]
We train a generative model to learn perturbations from data and define specifications with respect to the output of the learned model.
A unique challenge arising from this setting is that existing verifiers cannot tightly approximate sigmoid activations.
We propose a general meta-algorithm for handling sigmoid activations which leverages classical notions of counter-example-guided abstraction refinement.
arXiv Detail & Related papers (2022-06-08T04:09:13Z)