AHSecAgg and TSKG: Lightweight Secure Aggregation for Federated Learning Without Compromise
- URL: http://arxiv.org/abs/2312.04937v2
- Date: Mon, 11 Dec 2023 04:16:37 GMT
- Title: AHSecAgg and TSKG: Lightweight Secure Aggregation for Federated Learning Without Compromise
- Authors: Siqing Zhang, Yong Liao, Pengyuan Zhou
- Abstract summary: AHSecAgg is a lightweight secure aggregation protocol using additive homomorphic masks.
TSKG is a lightweight Threshold Signature based masking key generation method.
- Score: 7.908496863030483
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Leveraging federated learning (FL) to enable cross-domain privacy-sensitive data mining represents a vital breakthrough to accomplish privacy-preserving learning. However, attackers can infer the original user data by analyzing the uploaded intermediate parameters during the aggregation process. Therefore, secure aggregation has become a critical issue in the field of FL. Many secure aggregation protocols face the problem of high computation costs, which severely limits their applicability. To this end, we propose AHSecAgg, a lightweight secure aggregation protocol using additive homomorphic masks. AHSecAgg significantly reduces computation overhead without compromising the dropout handling capability or model accuracy. We prove the security of AHSecAgg in semi-honest and active adversary settings. In addition, in cross-silo scenarios where the group of participants is relatively fixed during each round, we propose TSKG, a lightweight Threshold Signature based masking key generation method. TSKG can generate different temporary secrets and shares for different aggregation rounds using the initial key and thus effectively eliminates the cost of secret sharing and key agreement. We prove TSKG does not sacrifice security. Extensive experiments show that AHSecAgg significantly outperforms state-of-the-art mask-based secure aggregation protocols in terms of computational efficiency, and TSKG effectively reduces the computation and communication costs for existing secure aggregation protocols.
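The abstract describes the mechanism only at a high level. The core idea behind mask-based secure aggregation can be sketched in a few lines: each client perturbs its update with masks that cancel in the aggregate, so the server learns only the sum. The following minimal Python sketch illustrates generic pairwise masking; the toy PRG, the field modulus, and all names are illustrative assumptions, not the AHSecAgg construction (which instead uses additive homomorphic masks):

```python
import itertools
import random

P = 2**61 - 1  # illustrative prime modulus for the mask field (assumed)

def prg(seed, dim):
    """Toy PRG: expand a shared seed into a reproducible mask vector."""
    rng = random.Random(seed)
    return [rng.randrange(P) for _ in range(dim)]

def mask_update(i, update, n_clients, seeds):
    """Add pairwise-cancelling masks: client i adds +prg(s_ij) for j > i
    and -prg(s_ij) for j < i, so all masks vanish in the global sum."""
    masked = [x % P for x in update]
    for j in range(n_clients):
        if j == i:
            continue
        m = prg(seeds[(min(i, j), max(i, j))], len(update))
        sign = 1 if i < j else -1
        masked = [(x + sign * y) % P for x, y in zip(masked, m)]
    return masked

# Demo: the server sums masked updates and recovers only the aggregate.
n, dim = 4, 3
updates = [[random.randrange(1000) for _ in range(dim)] for _ in range(n)]
seeds = {pair: random.randrange(2**32)
         for pair in itertools.combinations(range(n), 2)}
masked = [mask_update(i, updates[i], n, seeds) for i in range(n)]
aggregate = [sum(col) % P for col in zip(*masked)]
assert aggregate == [sum(col) % P for col in zip(*updates)]
```

TSKG addresses a different cost in this picture: rather than rerunning secret sharing and key agreement to establish fresh seeds every round, per-round secrets are derived from an initial key, so in the sketch above the `seeds` dictionary would be derived each round from one long-term secret per pair instead of being renegotiated.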
Related papers
- Practical Secure Aggregation by Combining Cryptography and Trusted Execution Environments [1.3068730884406587]
Secure aggregation enables a group of mutually distrustful parties, each holding private inputs, to collaboratively compute an aggregate value.
A major challenge in adopting secure aggregation approaches for practical applications is the significant computational overhead of the underlying cryptographic protocols.
Hardware-based security techniques such as trusted execution environments (TEEs) enable computation at near-native speeds.
In this work, we introduce several secure aggregation architectures that integrate both cryptographic and TEE-based techniques.
arXiv Detail & Related papers (2025-04-11T07:49:09Z)
- Fundamental Limits of Hierarchical Secure Aggregation with Cyclic User Association [93.46811590752814]
Hierarchical secure aggregation (HSA) is motivated by federated learning.
In this paper, we consider HSA with a cyclic association pattern where each user is connected to $B$ consecutive relays.
We propose an efficient aggregation scheme which includes a message design for the inputs inspired by gradient coding.
arXiv Detail & Related papers (2025-03-06T15:53:37Z)
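The cyclic association pattern described in the entry above is easy to picture: user $u$ connects to relays $u, u+1, \dots, u+B-1$ modulo the number of relays. A minimal sketch (the function and parameter names are illustrative, and the paper's exact indexing may differ):

```python
def cyclic_association(num_users, num_relays, B):
    """Associate user u with B consecutive relays, wrapping cyclically."""
    return {u: [(u + b) % num_relays for b in range(B)]
            for u in range(num_users)}

# Each of 5 users connects to B = 2 consecutive relays out of 5.
print(cyclic_association(num_users=5, num_relays=5, B=2))
# {0: [0, 1], 1: [1, 2], 2: [2, 3], 3: [3, 4], 4: [4, 0]}
```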
- NET-SA: An Efficient Secure Aggregation Architecture Based on In-Network Computing [10.150846654917753]
NET-SA is an efficient secure aggregation architecture for machine learning.
It reduces communication overhead by eliminating the communication-intensive phases of seed agreement and secret sharing.
It achieves up to 77x and 12x runtime improvements and a 2x reduction in total client communication cost compared with state-of-the-art methods.
arXiv Detail & Related papers (2025-01-02T10:27:06Z)
- BiCert: A Bilinear Mixed Integer Programming Formulation for Precise Certified Bounds Against Data Poisoning Attacks [62.897993591443594]
Data poisoning attacks pose one of the biggest threats to modern AI systems.
arXiv Detail & Related papers (2024-12-13T14:56:39Z)
- The Communication-Friendly Privacy-Preserving Machine Learning against Malicious Adversaries [14.232901861974819]
Privacy-preserving machine learning (PPML) is an innovative approach that allows for secure data analysis while safeguarding sensitive information.
We introduce an efficient protocol for secure linear function evaluation.
We extend the protocol to handle linear and non-linear layers, ensuring compatibility with a wide range of machine-learning models.
arXiv Detail & Related papers (2024-11-14T08:55:14Z)
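The reason secure *linear* function evaluation admits a particularly cheap protocol is that a public linear map commutes with additive secret sharing: each party can apply it locally to its share, with no interaction. A minimal sketch of that standard fact (the modulus and all names are illustrative assumptions, not the paper's protocol, which additionally targets malicious adversaries):

```python
import random

P = 2**61 - 1  # illustrative prime modulus (assumed)

def share_vector(x, n_parties):
    """Additively share each coordinate of x among n_parties (mod P)."""
    parts = [[random.randrange(P) for _ in x] for _ in range(n_parties - 1)]
    parts.append([(xi - sum(col)) % P for xi, col in zip(x, zip(*parts))])
    return parts

def eval_linear_on_share(w, b, x_share, add_bias):
    """Evaluate f(x) = <w, x> + b locally on one share; exactly one
    party adds the public bias so it is counted once in the sum."""
    y = sum(wi * xi for wi, xi in zip(w, x_share)) % P
    return (y + b) % P if add_bias else y

w, b, x = [3, 1, 4], 7, [10, 20, 30]
shares = share_vector(x, n_parties=3)
y_shares = [eval_linear_on_share(w, b, s, k == 0) for k, s in enumerate(shares)]
assert sum(y_shares) % P == (sum(wi * xi for wi, xi in zip(w, x)) + b) % P
```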
- PriRoAgg: Achieving Robust Model Aggregation with Minimum Privacy Leakage for Federated Learning [49.916365792036636]
Federated learning (FL) has recently gained significant momentum due to its potential to leverage large-scale distributed user data.
The transmitted model updates can potentially leak sensitive user information, and the lack of central control of the local training process leaves the global model susceptible to malicious manipulations on model updates.
We develop a general framework PriRoAgg, utilizing Lagrange coded computing and distributed zero-knowledge proof, to execute a wide range of robust aggregation algorithms while satisfying aggregated privacy.
arXiv Detail & Related papers (2024-07-12T03:18:08Z)
- Secure Aggregation Meets Sparsification in Decentralized Learning [1.7010199949406575]
This paper introduces CESAR, a novel secure aggregation protocol for Decentralized Learning (DL).
CESAR provably defends against honest-but-curious adversaries and can be formally adapted to counteract collusion between them.
arXiv Detail & Related papers (2024-05-13T12:52:58Z)
- Privacy-Preserving Distributed Learning for Residential Short-Term Load Forecasting [11.185176107646956]
Power system load data can inadvertently reveal the daily routines of residential users, posing a risk to their property security.
We introduce a Markovian Switching-based distributed training framework, the convergence of which is substantiated through rigorous theoretical analysis.
Case studies employing real-world power system load data validate the efficacy of our proposed algorithm.
arXiv Detail & Related papers (2024-02-02T16:39:08Z)
- MAPS: A Noise-Robust Progressive Learning Approach for Source-Free Domain Adaptive Keypoint Detection [76.97324120775475]
Cross-domain keypoint detection methods always require accessing the source data during adaptation.
This paper considers source-free domain adaptive keypoint detection, where only the well-trained source model is provided to the target domain.
arXiv Detail & Related papers (2023-02-09T12:06:08Z)
- Over-the-Air Federated Learning with Privacy Protection via Correlated Additive Perturbations [57.20885629270732]
We consider privacy aspects of wireless federated learning with Over-the-Air (OtA) transmission of gradient updates from multiple users/agents to an edge server.
Traditional perturbation-based methods provide privacy protection while sacrificing the training accuracy.
In this work, we aim at minimizing privacy leakage to the adversary and the degradation of model accuracy at the edge server.
arXiv Detail & Related papers (2022-10-05T13:13:35Z)
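The correlated-perturbation idea in the entry above has a simple extreme case: draw noise per user, then project it so the perturbations sum to zero across users. Each transmitted update is hidden by its own noise, yet the over-the-air aggregate is exact. A minimal NumPy sketch of that principle (not the paper's perturbation design, which balances leakage against accuracy rather than forcing exact cancellation):

```python
import numpy as np

def zero_sum_perturbations(n_users, dim, sigma, rng):
    """Draw Gaussian noise per user, then subtract the cross-user mean so
    the perturbations are correlated and cancel exactly in the sum."""
    z = rng.normal(0.0, sigma, size=(n_users, dim))
    return z - z.mean(axis=0, keepdims=True)

rng = np.random.default_rng(0)
updates = rng.normal(size=(5, 3))
noise = zero_sum_perturbations(5, 3, sigma=1.0, rng=rng)
# Each transmitted update is perturbed, but the aggregate is exact.
assert np.allclose((updates + noise).sum(axis=0), updates.sum(axis=0))
```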
- Log Barriers for Safe Black-box Optimization with Application to Safe Reinforcement Learning [72.97229770329214]
We introduce a general approach for solving high-dimensional non-linear optimization problems in which maintaining safety during learning is crucial.
Our approach called LBSGD is based on applying a logarithmic barrier approximation with a carefully chosen step size.
We demonstrate the effectiveness of our approach on minimizing constraint violations in policy optimization tasks in safe reinforcement learning.
arXiv Detail & Related papers (2022-07-21T11:14:47Z)
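The log-barrier idea in LBSGD can be made concrete: replace the constrained problem $\min f(x)$ s.t. $g_i(x) < 0$ with the surrogate $f(x) - \frac{1}{t}\sum_i \log(-g_i(x))$ and take gradient steps whose size shrinks near the constraint boundary so the iterates stay feasible. The sketch below shows a generic log-barrier gradient step with an assumed step-size rule, not the paper's carefully chosen one:

```python
import numpy as np

def log_barrier_step(x, grad_f, constraints, grad_constraints, eta_base, t=10.0):
    """One gradient step on f(x) - (1/t) * sum_i log(-g_i(x)), valid while
    every g_i(x) < 0; the step size shrinks near the boundary (assumed rule)."""
    g_vals = np.array([g(x) for g in constraints])        # must all be < 0
    barrier_grad = sum((-1.0 / (t * gi)) * dg(x)
                       for gi, dg in zip(g_vals, grad_constraints))
    direction = grad_f(x) + barrier_grad
    eta = eta_base * min(1.0, float(np.min(-g_vals)))     # cautious near boundary
    return x - eta * direction

# Toy problem: minimize (x - 2)^2 subject to x < 1.
x = np.array([0.0])
for _ in range(200):
    x = log_barrier_step(
        x,
        grad_f=lambda x: 2 * (x - 2),
        constraints=[lambda x: float(x[0] - 1)],
        grad_constraints=[lambda x: np.array([1.0])],
        eta_base=0.05,
    )
print(x)  # converges just inside the boundary, near x = 0.95
```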
- Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z)
- Sparsified Secure Aggregation for Privacy-Preserving Federated Learning [1.2891210250935146]
We propose a lightweight gradient sparsification framework for secure aggregation.
Our theoretical analysis demonstrates that the proposed framework can significantly reduce the communication overhead of secure aggregation.
Our experiments demonstrate that our framework reduces the communication overhead by up to 7.8x, while also speeding up the wall clock training time by 1.13x, when compared to conventional secure aggregation benchmarks.
arXiv Detail & Related papers (2021-12-23T22:44:21Z)
- Efficient Sparse Secure Aggregation for Federated Learning [0.20052993723676896]
We adapt compression-based federated techniques to additive secret sharing, leading to an efficient secure aggregation protocol.
We prove its privacy against malicious adversaries and its correctness in the semi-honest setting.
Compared to prior works on secure aggregation, our protocol has lower, adaptable communication costs for a similar accuracy.
arXiv Detail & Related papers (2020-07-29T14:28:30Z)
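The combination in the entry above, compression plus additive secret sharing, can be sketched end to end: a client keeps only its top-$k$ coordinates, fixed-point encodes them, and secret-shares just those $k$ values, so sharing cost scales with $k$ rather than the model dimension. An illustrative sketch (the scaling, modulus, and per-client index handling are assumptions, not the paper's protocol):

```python
import random

P = 2**61 - 1   # illustrative prime modulus (assumed)
SCALE = 10**6   # fixed-point scaling for float gradient values (assumed)

def top_k_indices(vec, k):
    """Indices of the k largest-magnitude coordinates."""
    return sorted(range(len(vec)), key=lambda i: -abs(vec[i]))[:k]

def share_values(values, n_parties):
    """Additively secret-share fixed-point encodings of the values mod P."""
    parts = [[random.randrange(P) for _ in values] for _ in range(n_parties - 1)]
    parts.append([(int(round(v * SCALE)) - sum(col)) % P
                  for v, col in zip(values, zip(*parts))])
    return parts

grad = [0.01, -0.9, 0.3, 0.0, 0.5]
idx = top_k_indices(grad, k=2)                     # -> [1, 4]
shares = share_values([grad[i] for i in idx], n_parties=3)
# Reconstruction: sum shares mod P, then decode the signed fixed point.
recovered = [sum(col) % P for col in zip(*shares)]
recovered = [(v - P if v > P // 2 else v) / SCALE for v in recovered]
print(idx, recovered)                              # [1, 4] [-0.9, 0.5]
```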
- Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
arXiv Detail & Related papers (2020-05-01T04:28:38Z)
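The accuracy-privacy trade-off mentioned above enters through the basic differentially private update step: clip each update to bound its sensitivity, then add noise calibrated to the clip norm. The sketch below is the standard Gaussian-mechanism baseline, not the paper's Laplacian-smoothing variant:

```python
import numpy as np

def dp_sanitize(update, clip_norm, noise_multiplier, rng):
    """Clip the update to L2 norm <= clip_norm, then add Gaussian noise
    scaled to the clip norm; larger noise_multiplier means more privacy
    and less accuracy."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(0.0, noise_multiplier * clip_norm,
                                size=update.shape)

rng = np.random.default_rng(0)
update = rng.normal(size=8)
print(dp_sanitize(update, clip_norm=1.0, noise_multiplier=0.5, rng=rng))
```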
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.