Deciphering the Interplay between Attack and Protection Complexity in Privacy-Preserving Federated Learning
- URL: http://arxiv.org/abs/2508.11907v1
- Date: Sat, 16 Aug 2025 04:39:16 GMT
- Title: Deciphering the Interplay between Attack and Protection Complexity in Privacy-Preserving Federated Learning
- Authors: Xiaojin Zhang, Mingcong Xu, Yiming Li, Wei Chen, Qiang Yang
- Abstract summary: Federated learning (FL) offers a promising paradigm for collaborative model training while preserving data privacy. "Attack Complexity" is the minimum computational and data resources an adversary requires to reconstruct private data. "Protection Complexity" is the expected distortion introduced by privacy mechanisms.
- Score: 17.040727625306083
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) offers a promising paradigm for collaborative model training while preserving data privacy. However, its susceptibility to gradient inversion attacks poses a significant challenge, necessitating robust privacy protection mechanisms. This paper introduces a novel theoretical framework to decipher the intricate interplay between attack and protection complexities in privacy-preserving FL. We formally define "Attack Complexity" as the minimum computational and data resources an adversary requires to reconstruct private data below a given error threshold, and "Protection Complexity" as the expected distortion introduced by privacy mechanisms. Leveraging Maximum Bayesian Privacy (MBP), we derive tight theoretical bounds for protection complexity, demonstrating its scaling with model dimensionality and privacy budget. Furthermore, we establish comprehensive bounds for attack complexity, revealing its dependence on privacy leakage, gradient distortion, model dimension, and the chosen privacy level. Our findings quantitatively illuminate the fundamental trade-offs between privacy guarantees, system utility, and the effort required for both attacking and defending. This framework provides critical insights for designing more secure and efficient federated learning systems.
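The abstract's "Protection Complexity" is defined as the expected distortion introduced by a privacy mechanism, and is said to scale with model dimensionality and the privacy budget. The paper's formal bounds are not reproduced here, but the dimensional scaling can be illustrated with a minimal sketch: for a Gaussian mechanism adding noise of scale sigma to a d-dimensional gradient, the expected squared distortion is d * sigma^2. The function name and parameters below are illustrative, not from the paper.

```python
import random


def protection_complexity_gaussian(d, sigma, n_trials=2000, seed=0):
    """Monte Carlo estimate of the expected squared distortion E[||eta||^2]
    for Gaussian noise eta ~ N(0, sigma^2 I_d) added to a d-dimensional
    gradient. Analytically this equals d * sigma^2, illustrating the
    linear scaling of distortion with model dimension at a fixed noise
    scale (and hence with the privacy budget that determines sigma)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        total += sum(rng.gauss(0.0, sigma) ** 2 for _ in range(d))
    return total / n_trials


# Estimate for d = 50, sigma = 0.1; the analytic value is 50 * 0.01 = 0.5.
est = protection_complexity_gaussian(d=50, sigma=0.1)
```

Doubling d (or sigma^2) doubles the expected distortion, which is the kind of dimensional trade-off the paper's bounds formalize.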
Related papers
- Coding-Enforced Resilient and Secure Aggregation for Hierarchical Federated Learning [30.254515308020512]
Hierarchical federated learning (HFL) has emerged as an effective paradigm to enhance link quality between clients and the server. We propose a robust hierarchical secure aggregation scheme, termed H-SecCoGC, which integrates coding strategies to enforce structured aggregation.
arXiv Detail & Related papers (2026-01-25T21:07:22Z) - Subgraph Federated Learning via Spectral Methods [52.40322201034717]
FedLap is a novel framework that captures inter-node dependencies while ensuring privacy and scalability. We provide a formal analysis of the privacy of FedLap, demonstrating that it preserves privacy.
arXiv Detail & Related papers (2025-10-29T16:22:32Z) - Communication-Efficient and Privacy-Adaptable Mechanism for Federated Learning [54.20871516148981]
We introduce the Communication-Efficient and Privacy-Adaptable Mechanism (CEPAM), which achieves communication efficiency and privacy protection simultaneously. We theoretically analyze the privacy guarantee of CEPAM and investigate the trade-offs between user privacy and accuracy.
arXiv Detail & Related papers (2025-01-21T11:16:05Z) - How Breakable Is Privacy: Probing and Resisting Model Inversion Attacks in Collaborative Inference [13.453033795109155]
Collaborative inference (CI) improves computational efficiency for edge devices by transmitting intermediate features to cloud models. There is no established criterion for assessing the difficulty of model inversion attacks (MIAs). We propose the first theoretical criterion to assess MIA difficulty in CI, identifying mutual information, entropy, and effective information volume as key influencing factors.
arXiv Detail & Related papers (2025-01-01T13:00:01Z) - Bayes-Nash Generative Privacy Against Membership Inference Attacks [24.330984323956173]
We propose a game-theoretic framework modeling privacy protection as a Bayesian game between defender and attacker. To address strategic complexity, we represent the defender's mixed strategy as a neural network generator mapping private datasets to public representations. Our approach significantly outperforms state-of-the-art methods by generating stronger attacks and achieving better privacy-utility tradeoffs.
arXiv Detail & Related papers (2024-10-09T20:29:04Z) - Convergent Differential Privacy Analysis for General Federated Learning: the $f$-DP Perspective [57.35402286842029]
Federated learning (FL) is an efficient collaborative training paradigm with a focus on local privacy.
Differential privacy (DP) is a classical approach to capture and ensure the reliability of privacy protections.
arXiv Detail & Related papers (2024-08-28T08:22:21Z) - Bridging Privacy and Robustness for Trustworthy Machine Learning [6.318638597489423]
Machine learning systems require inherent robustness against data perturbations and adversarial manipulations. This paper systematically investigates the intricate theoretical relationships between Local Differential Privacy (LDP) and Maximum Bayesian Privacy (MBP). We bridge these privacy concepts with algorithmic robustness, particularly within the Probably Approximately Correct (PAC) learning framework.
arXiv Detail & Related papers (2024-03-25T10:06:45Z) - Theoretically Principled Federated Learning for Balancing Privacy and Utility [61.03993520243198]
We propose a general learning framework for the protection mechanisms that protects privacy via distorting model parameters.
It can achieve personalized utility-privacy trade-off for each model parameter, on each client, at each communication round in federated learning.
arXiv Detail & Related papers (2023-05-24T13:44:02Z) - Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy (DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
arXiv Detail & Related papers (2023-02-19T16:58:53Z) - Trading Off Privacy, Utility and Efficiency in Federated Learning [22.53326117450263]
We formulate and quantify the trade-offs between privacy leakage, utility loss, and efficiency reduction.
We analyze the lower bounds for the privacy leakage, utility loss and efficiency reduction for several widely-adopted protection mechanisms.
arXiv Detail & Related papers (2022-09-01T05:20:04Z) - Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z) - Federated Deep Learning with Bayesian Privacy [28.99404058773532]
Federated learning (FL) aims to protect data privacy by cooperatively learning a model without sharing private data among users.
Homomorphic encryption (HE) based methods provide secure privacy protections but suffer from extremely high computational and communication overheads.
Deep learning with Differential Privacy (DP) was implemented as a practical learning algorithm at a manageable cost in complexity.
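The entry above contrasts homomorphic encryption, which is secure but computationally heavy, with differential privacy, which is practical at manageable cost. A standard DP-SGD-style update step (gradient clipping followed by calibrated Gaussian noise) illustrates why the DP approach is cheap: it adds only a norm computation and noise sampling per step. This is a generic sketch of the well-known technique, not the cited paper's specific mechanism.

```python
import math
import random


def dp_perturb_gradient(grad, clip_norm, sigma, rng):
    """One DP-SGD-style gradient step: rescale the gradient so its L2 norm
    is at most clip_norm, then add Gaussian noise with standard deviation
    sigma * clip_norm to each coordinate. Clipping bounds each example's
    sensitivity so the added noise yields a differential privacy guarantee."""
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [g * scale for g in grad]
    return [g + rng.gauss(0.0, sigma * clip_norm) for g in clipped]


# A gradient of norm 5 is clipped to norm 1 before noise is added.
rng = random.Random(42)
noisy = dp_perturb_gradient([3.0, 4.0], clip_norm=1.0, sigma=0.5, rng=rng)
```

The per-step cost is linear in the model dimension, which is why DP-based training remains practical where encryption-based protection does not.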
arXiv Detail & Related papers (2021-09-27T12:48:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.