Fundamental Limits of Hierarchical Secure Aggregation with Cyclic User Association
- URL: http://arxiv.org/abs/2503.04564v2
- Date: Fri, 07 Mar 2025 10:01:49 GMT
- Title: Fundamental Limits of Hierarchical Secure Aggregation with Cyclic User Association
- Authors: Xiang Zhang, Zhou Li, Kai Wan, Hua Sun, Mingyue Ji, Giuseppe Caire
- Abstract summary: Hierarchical secure aggregation (HSA) is motivated by federated learning. In this paper, we consider HSA with a cyclic association pattern where each user is connected to $B$ consecutive relays. We propose an efficient aggregation scheme which includes a message design for the inputs inspired by gradient coding.
- Score: 93.46811590752814
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Secure aggregation is motivated by federated learning (FL) where a cloud server aims to compute an averaged model (i.e., weights of deep neural networks) of the locally-trained models of numerous clients, while adhering to data security requirements. Hierarchical secure aggregation (HSA) extends this concept to a three-layer network, where clustered users communicate with the server through an intermediate layer of relays. In HSA, beyond conventional server security, relay security is also enforced to ensure that the relays remain oblivious to the users' inputs (an abstraction of the local models in FL). Existing study on HSA assumes that each user is associated with only one relay, limiting opportunities for coding across inter-cluster users to achieve efficient communication and key generation. In this paper, we consider HSA with a cyclic association pattern where each user is connected to $B$ consecutive relays in a wrap-around manner. We propose an efficient aggregation scheme which includes a message design for the inputs inspired by gradient coding-a well-known technique for efficient communication in distributed computing-along with a highly nontrivial security key design. We also derive novel converse bounds on the minimum achievable communication and key rates using information-theoretic arguments.
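The cyclic, wrap-around association pattern described in the abstract can be sketched concretely. This is a minimal illustration assuming one user cluster per relay; `cyclic_association` is a hypothetical helper, not code from the paper:

```python
def cyclic_association(num_relays: int, B: int) -> dict[int, list[int]]:
    """Users in cluster u connect to the B consecutive relays
    u, u+1, ..., u+B-1 (modulo num_relays, i.e. wrap-around)."""
    return {u: [(u + b) % num_relays for b in range(B)]
            for u in range(num_relays)}

# With 5 relays and B = 3, cluster 4 wraps around to relays 4, 0, and 1.
assoc = cyclic_association(num_relays=5, B=3)
print(assoc[4])  # -> [4, 0, 1]
```

Because each input reaches B relays, there is redundancy across clusters that the paper's gradient-coding-inspired message design can exploit.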
Related papers
- RLSA-PFL: Robust Lightweight Secure Aggregation with Model Inconsistency Detection in Privacy-Preserving Federated Learning [12.804623314091508]
Federated Learning (FL) allows users to collaboratively train a global machine learning model by sharing only their local models, without exposing their private data to a central server.
Studies have revealed privacy vulnerabilities in FL, where adversaries can potentially infer sensitive information from the shared model parameters.
We present an efficient masking-based secure aggregation scheme that utilizes lightweight cryptographic primitives to mitigate privacy risks.
arXiv Detail & Related papers (2025-02-13T06:01:09Z) - NET-SA: An Efficient Secure Aggregation Architecture Based on In-Network Computing [10.150846654917753]
NET-SA is an efficient secure aggregation architecture for machine learning. It reduces communication overhead by eliminating the communication-intensive phases of seed agreement and secret sharing. It achieves up to 77x and 12x improvements in runtime and a 2x reduction in total client communication cost compared with state-of-the-art methods.
arXiv Detail & Related papers (2025-01-02T10:27:06Z) - ACCESS-FL: Agile Communication and Computation for Efficient Secure Aggregation in Stable Federated Learning Networks [26.002975401820887]
Federated Learning (FL) is a distributed learning framework designed for privacy-aware applications.
Traditional FL approaches risk exposing sensitive client data when plain model updates are transmitted to the server.
Google's Secure Aggregation (SecAgg) protocol addresses this threat by employing a double-masking technique.
We propose ACCESS-FL, a communication-and-computation-efficient secure aggregation method.
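The masking idea behind SecAgg, mentioned above, rests on pairwise additive masks that cancel in the server's sum; the second ("self") mask of the double-masking technique, recovered via secret sharing to handle dropouts, is omitted here. A minimal sketch of the cancellation, with all names illustrative:

```python
import random

def pairwise_masks(num_clients: int, modulus: int, seed: int = 0):
    """One shared random mask per client pair: client i adds it,
    client j subtracts it, so they cancel in the aggregate."""
    rng = random.Random(seed)
    masks = [[0] * num_clients for _ in range(num_clients)]
    for i in range(num_clients):
        for j in range(i + 1, num_clients):
            m = rng.randrange(modulus)
            masks[i][j] = m               # client i adds m
            masks[j][i] = -m % modulus    # client j subtracts m
    return masks

def mask_update(x: int, i: int, masks, modulus: int) -> int:
    return (x + sum(masks[i])) % modulus

clients = [3, 5, 7]           # toy scalar "model updates"
modulus = 2 ** 16
masks = pairwise_masks(len(clients), modulus)
masked = [mask_update(x, i, masks, modulus) for i, x in enumerate(clients)]
# The pairwise masks cancel in the sum, so the server learns only the aggregate.
assert sum(masked) % modulus == sum(clients) % modulus
```

The individual masked values reveal nothing about each client's update on their own, yet their modular sum equals the true aggregate.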
arXiv Detail & Related papers (2024-09-03T09:03:38Z) - EncCluster: Scalable Functional Encryption in Federated Learning through Weight Clustering and Probabilistic Filters [3.9660142560142067]
Federated Learning (FL) enables model training across decentralized devices by communicating solely local model updates to an aggregation server.
FL remains vulnerable to inference attacks during model update transmissions.
We present EncCluster, a novel method that integrates model compression through weight clustering with recent decentralized FE and privacy-enhancing data encoding.
arXiv Detail & Related papers (2024-06-13T14:16:50Z) - FedMPQ: Secure and Communication-Efficient Federated Learning with Multi-codebook Product Quantization [12.83265009728818]
We propose a novel uplink communication compression method for federated learning, named FedMPQ.
In contrast to previous works, our approach exhibits greater robustness in scenarios where data is not independently and identically distributed.
Experiments conducted on the LEAF dataset demonstrate that our proposed method achieves 99% of the baseline's final accuracy.
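FedMPQ's exact codebook construction is not detailed in this snippet; as a generic illustration of multi-codebook product quantization (all values and names hypothetical), a client could split its update into sub-vectors and transmit only the index of the nearest codeword in each codebook:

```python
import numpy as np

def pq_compress(vec: np.ndarray, codebooks: list[np.ndarray]) -> list[int]:
    """Split vec into one sub-vector per codebook and replace each
    sub-vector by the index of its nearest codeword."""
    d = len(vec) // len(codebooks)
    return [int(np.argmin(np.linalg.norm(cb - vec[k * d:(k + 1) * d], axis=1)))
            for k, cb in enumerate(codebooks)]

def pq_decompress(indices: list[int], codebooks: list[np.ndarray]) -> np.ndarray:
    """Reassemble the approximate vector from the chosen codewords."""
    return np.concatenate([cb[i] for cb, i in zip(codebooks, indices)])

# Two codebooks, each with 4 codewords of dimension 2.
codebooks = [np.array([[0., 0.], [1., 1.], [2., 2.], [3., 3.]]) for _ in range(2)]
update = np.array([0.9, 1.1, 2.1, 1.9])
idx = pq_compress(update, codebooks)      # indices sent uplink: [1, 2]
approx = pq_decompress(idx, codebooks)    # server reconstruction: [1., 1., 2., 2.]
```

Only the small integer indices travel uplink, which is the source of the communication savings; the reconstruction error depends on how well the codebooks cover the update distribution.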
arXiv Detail & Related papers (2024-04-21T08:27:36Z) - Communication Efficient ConFederated Learning: An Event-Triggered SAGA Approach [67.27031215756121]
Federated learning (FL) is a machine learning paradigm that targets model training without gathering the local data over various data sources.
Standard FL, which employs a single server, can only support a limited number of users, leading to degraded learning capability.
In this work, we consider a multi-server FL framework, referred to as Confederated Learning (CFL), in order to accommodate a larger number of users.
arXiv Detail & Related papers (2024-02-28T03:27:10Z) - Sparsity-Aware Intelligent Massive Random Access Control in Open RAN: A Reinforcement Learning Based Approach [61.74489383629319]
Massive random access of devices in the emerging Open Radio Access Network (O-RAN) brings great challenge to the access control and management.
A reinforcement learning (RL)-assisted closed-loop access control scheme is proposed to preserve the sparsity of access requests.
A deep-RL-assisted SAUD scheme is proposed to handle highly complex environments with continuous, high-dimensional state and action spaces.
arXiv Detail & Related papers (2023-03-05T12:25:49Z) - Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z) - ScionFL: Efficient and Robust Secure Quantized Aggregation [36.668162197302365]
We introduce ScionFL, the first secure aggregation framework for federated learning that operates efficiently on quantized inputs and simultaneously provides robustness against malicious clients.
We show that with no overhead for clients and moderate overhead for the server, we obtain comparable accuracy for standard FL benchmarks.
arXiv Detail & Related papers (2022-10-13T21:46:55Z) - Safe RAN control: A Symbolic Reinforcement Learning Approach [62.997667081978825]
We present a Symbolic Reinforcement Learning (SRL) based architecture for safety control of Radio Access Network (RAN) applications.
We provide a purely automated procedure in which a user can specify high-level logical safety specifications for a given cellular network topology.
We introduce a user interface (UI) developed to help a user set intent specifications for the system and inspect differences in agent-proposed actions.
arXiv Detail & Related papers (2021-06-03T16:45:40Z) - Symbolic Reinforcement Learning for Safe RAN Control [62.997667081978825]
We show a Symbolic Reinforcement Learning (SRL) architecture for safe control in Radio Access Network (RAN) applications.
In our tool, a user can select high-level safety specifications expressed in Linear Temporal Logic (LTL) to shield an RL agent running in a given cellular network.
We demonstrate a user interface (UI) that helps the user set intent specifications for the architecture and inspect the difference between allowed and blocked actions.
arXiv Detail & Related papers (2021-03-11T10:56:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.