Fundamental Limits of Hierarchical Secure Aggregation with Cyclic User Association
- URL: http://arxiv.org/abs/2503.04564v5
- Date: Tue, 08 Jul 2025 11:18:54 GMT
- Title: Fundamental Limits of Hierarchical Secure Aggregation with Cyclic User Association
- Authors: Xiang Zhang, Zhou Li, Kai Wan, Hua Sun, Mingyue Ji, Giuseppe Caire
- Abstract summary: Hierarchical secure aggregation is motivated by federated learning (FL). In this paper, we consider HSA with a cyclic association pattern where each user is connected to $B$ consecutive relays. We propose an efficient aggregation scheme which includes a message design for the inputs inspired by gradient coding.
- Score: 93.46811590752814
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Secure aggregation is motivated by federated learning (FL), where a cloud server aims to compute an averaged model (i.e., weights of deep neural networks) of the locally-trained models of numerous clients, while adhering to data security requirements. Hierarchical secure aggregation (HSA) extends this concept to a three-layer hierarchical network, where clustered users communicate with the server through an intermediate layer of relays. In HSA, beyond conventional server security, relay security is also enforced to ensure that the relays remain oblivious to the users' inputs (an abstraction of the local models in FL). Existing studies on HSA assume that each user is associated with only one relay, limiting opportunities for coding across inter-cluster users to achieve efficient communication and key generation. In this paper, we consider HSA with a cyclic association pattern where each user is connected to $B$ consecutive relays in a wrap-around manner. We propose an efficient aggregation scheme which includes a message design for the inputs inspired by gradient coding, a well-known technique for efficient communication in distributed computing, along with a highly non-trivial security key design. We also derive novel converse bounds on the minimum achievable communication and key rates using information-theoretic arguments.
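As a loose illustration of the setting, the sketch below pairs the cyclic user-relay association with zero-sum one-time-pad masking, so that relays see only masked values while the masks cancel at the server. This is a minimal sketch under simplifying assumptions (each message goes to a single relay, with no coding across the $B$ relays), not the paper's actual scheme; all function names and parameters are illustrative.

```python
import random

def cyclic_relays(user, B, num_relays):
    # User `user` connects to B consecutive relays, wrapping around.
    return [(user + b) % num_relays for b in range(B)]

def secure_sum(inputs, B, num_relays, q=2**31 - 1):
    n = len(inputs)
    # Zero-sum one-time-pad keys: individually uniform, but summing to 0 mod q,
    # so relays see only masked values and the masks cancel at the server.
    keys = [random.randrange(q) for _ in range(n - 1)]
    keys.append((-sum(keys)) % q)
    relay_totals = [0] * num_relays
    for i, (x, z) in enumerate(zip(inputs, keys)):
        # Send the masked input to the first of the B associated relays
        # (a real scheme would code the message across all B relays).
        target = cyclic_relays(i, B, num_relays)[0]
        relay_totals[target] = (relay_totals[target] + x + z) % q
    # The server sums the relay aggregates; the zero-sum keys cancel.
    return sum(relay_totals) % q

inputs = [3, 14, 15, 92, 65, 35]
assert secure_sum(inputs, B=2, num_relays=3) == sum(inputs)
```

Server security in the full problem requires more than this (the server must learn nothing beyond the sum), but the sketch shows why key cancellation makes relay-side obliviousness cheap under cyclic association.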
Related papers
- Edge-Assisted Collaborative Fine-Tuning for Multi-User Personalized Artificial Intelligence Generated Content (AIGC) [38.59865959433328]
Cloud-based solutions aid in computation but often fall short in addressing privacy risks, personalization efficiency, and communication costs. We propose a novel cluster-aware hierarchical federated aggregation framework. We show that the framework achieves accelerated convergence while maintaining practical viability for scalable multi-user personalized AIGC services.
arXiv Detail & Related papers (2025-08-06T06:07:24Z) - Information-Theoretic Decentralized Secure Aggregation with Collusion Resilience [98.31540557973179]
We study the problem of decentralized secure aggregation (DSA) from an information-theoretic perspective. We characterize the optimal rate region, which specifies the minimum achievable communication and secret key rates for DSA. Our results establish the fundamental performance limits of DSA, providing insights for the design of provably secure and communication-efficient protocols.
arXiv Detail & Related papers (2025-08-01T12:51:37Z) - Collusion-Resilient Hierarchical Secure Aggregation with Heterogeneous Security Constraints [42.80769898523078]
Motivated by federated learning (FL), secure aggregation aims to securely compute, as efficiently as possible, the sum of a set of inputs distributed across many users. We study weakly-secure HSA (WS-HSA) with collusion resilience. We characterize the optimal total key rate, i.e., the total number of independent key symbols required to ensure both server and relay security.
arXiv Detail & Related papers (2025-07-19T23:09:57Z) - Federated Learning-Enabled Hybrid Language Models for Communication-Efficient Token Transmission [87.68447072141402]
Hybrid Language Models (HLMs) combine the low-latency efficiency of Small Language Models (SLMs) on edge devices with the high accuracy of Large Language Models (LLMs) on centralized servers. We propose FedHLM, a communication-efficient HLM framework that integrates uncertainty-aware inference with Federated Learning (FL).
arXiv Detail & Related papers (2025-06-30T02:56:11Z) - RLSA-PFL: Robust Lightweight Secure Aggregation with Model Inconsistency Detection in Privacy-Preserving Federated Learning [12.804623314091508]
Federated Learning (FL) allows users to collaboratively train a global machine learning model by sharing only local model updates, without exposing their private data to a central server.
Studies have revealed privacy vulnerabilities in FL, where adversaries can potentially infer sensitive information from the shared model parameters.
We present an efficient masking-based secure aggregation scheme that utilizes lightweight cryptographic primitives to mitigate these privacy risks.
arXiv Detail & Related papers (2025-02-13T06:01:09Z) - NET-SA: An Efficient Secure Aggregation Architecture Based on In-Network Computing [10.150846654917753]
NET-SA is an efficient secure aggregation architecture for machine learning. It reduces communication overhead by eliminating the communication-intensive phases of seed agreement and secret sharing. It achieves up to 77x and 12x improvements in runtime and a 2x reduction in total client communication cost compared with state-of-the-art methods.
arXiv Detail & Related papers (2025-01-02T10:27:06Z) - ACCESS-FL: Agile Communication and Computation for Efficient Secure Aggregation in Stable Federated Learning Networks [26.002975401820887]
Federated Learning (FL) is a distributed learning framework designed for privacy-aware applications.
Traditional FL approaches risk exposing sensitive client data when plain model updates are transmitted to the server.
Google's Secure Aggregation (SecAgg) protocol addresses this threat by employing a double-masking technique.
We propose ACCESS-FL, a communication-and-computation-efficient secure aggregation method.
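The double-masking technique mentioned for SecAgg builds on pairwise mask cancellation. Below is a minimal single-mask sketch of that core idea, omitting SecAgg's self-masks, secret sharing, and dropout handling; all names are illustrative.

```python
import random

def pairwise_masked_updates(updates, q=2**31 - 1):
    # Each pair (i, j) with i < j shares a random mask s_ij; user i adds it
    # and user j subtracts it, so all pairwise masks cancel in the sum.
    n = len(updates)
    masked = [u % q for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            s = random.randrange(q)
            masked[i] = (masked[i] + s) % q
            masked[j] = (masked[j] - s) % q
    return masked

updates = [5, 7, 11, 13]
masked = pairwise_masked_updates(updates)
# Individual masked updates reveal nothing, yet the aggregate is exact.
assert sum(masked) % (2**31 - 1) == sum(updates)
```

Full SecAgg adds a second (self) mask and Shamir secret sharing precisely so the sum can still be recovered when some clients drop out mid-round.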
arXiv Detail & Related papers (2024-09-03T09:03:38Z) - EncCluster: Scalable Functional Encryption in Federated Learning through Weight Clustering and Probabilistic Filters [3.9660142560142067]
Federated Learning (FL) enables model training across decentralized devices by communicating solely local model updates to an aggregation server.
FL remains vulnerable to inference attacks during model update transmissions.
We present EncCluster, a novel method that integrates model compression through weight clustering with recent decentralized FE and privacy-enhancing data encoding.
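The weight-clustering component can be pictured as 1-D k-means over model weights: a client then transmits k centroids plus small per-weight cluster indices instead of full-precision weights. This is a hedged sketch of that general idea, not EncCluster's actual algorithm (which additionally applies functional encryption and probabilistic filters); names are illustrative.

```python
import random

def cluster_weights(weights, k, iters=20):
    """1-D k-means: compress weights to k centroids + per-weight indices."""
    centroids = random.sample(weights, k)  # initialize from the data
    assign = [0] * len(weights)
    for _ in range(iters):
        # Assign each weight to its nearest centroid.
        assign = [min(range(k), key=lambda c: abs(w - centroids[c]))
                  for w in weights]
        # Recompute each centroid as the mean of its members.
        for c in range(k):
            members = [w for w, a in zip(weights, assign) if a == c]
            if members:
                centroids[c] = sum(members) / len(members)
    return centroids, assign

random.seed(0)
weights = [0.1, 0.2, 0.15, 5.0, 5.1, 4.9]
centroids, assign = cluster_weights(weights, k=2)
```

With k centroids, each weight costs only log2(k) bits for its index, which is where the communication savings come from.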
arXiv Detail & Related papers (2024-06-13T14:16:50Z) - FedMPQ: Secure and Communication-Efficient Federated Learning with Multi-codebook Product Quantization [12.83265009728818]
We propose a novel uplink communication compression method for federated learning, named FedMPQ.
In contrast to previous works, our approach exhibits greater robustness in scenarios where data is not independently and identically distributed.
Experiments conducted on the LEAF dataset demonstrate that our proposed method achieves 99% of the baseline's final accuracy.
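Multi-codebook product quantization compresses an update by splitting it into sub-vectors and sending one codeword index per sub-vector. Below is a hedged, pure-Python sketch of that general idea, not FedMPQ's actual codebook construction; names and parameters are illustrative.

```python
def nearest(codebook, sub):
    # Index of the codeword closest (squared Euclidean distance) to `sub`.
    return min(range(len(codebook)),
               key=lambda k: sum((c - s) ** 2 for c, s in zip(codebook[k], sub)))

def pq_compress(vec, codebooks, d):
    # Split `vec` into length-d sub-vectors; emit one index per sub-vector.
    subs = [vec[i:i + d] for i in range(0, len(vec), d)]
    return [nearest(cb, s) for cb, s in zip(codebooks, subs)]

def pq_decompress(indices, codebooks):
    # Concatenate the selected codewords to approximate the original vector.
    out = []
    for cb, k in zip(codebooks, indices):
        out.extend(cb[k])
    return out

# Two sub-spaces of dimension 2, with two codewords each (toy codebooks).
codebooks = [[[0.0, 0.0], [1.0, 1.0]], [[2.0, 2.0], [3.0, 3.0]]]
vec = [0.9, 1.1, 2.1, 1.9]
indices = pq_compress(vec, codebooks, d=2)   # -> [1, 0]
approx = pq_decompress(indices, codebooks)   # -> [1.0, 1.0, 2.0, 2.0]
```

The uplink then carries only the indices (here two small integers instead of four floats), while the server reconstructs an approximation from shared codebooks.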
arXiv Detail & Related papers (2024-04-21T08:27:36Z) - Communication Efficient ConFederated Learning: An Event-Triggered SAGA Approach [67.27031215756121]
Federated learning (FL) is a machine learning paradigm that targets model training without gathering the local data over various data sources.
Standard FL, which employs a single server, can only support a limited number of users, leading to degraded learning capability.
In this work, we consider a multi-server FL framework, referred to as Confederated Learning (CFL), in order to accommodate a larger number of users.
arXiv Detail & Related papers (2024-02-28T03:27:10Z) - Sparsity-Aware Intelligent Massive Random Access Control in Open RAN: A Reinforcement Learning Based Approach [61.74489383629319]
Massive random access of devices in the emerging Open Radio Access Network (O-RAN) brings great challenges to access control and management.
A reinforcement-learning (RL)-assisted closed-loop access control scheme is proposed to preserve the sparsity of access requests.
Deep-RL-assisted SAUD is proposed to handle highly complex environments with continuous, high-dimensional state and action spaces.
arXiv Detail & Related papers (2023-03-05T12:25:49Z) - Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z) - ScionFL: Efficient and Robust Secure Quantized Aggregation [36.668162197302365]
We introduce ScionFL, the first secure aggregation framework for federated learning that operates efficiently on quantized inputs and simultaneously provides robustness against malicious clients.
We show that with no overhead for clients and moderate overhead for the server, we obtain comparable accuracy for standard FL benchmarks.
arXiv Detail & Related papers (2022-10-13T21:46:55Z) - Safe RAN control: A Symbolic Reinforcement Learning Approach [62.997667081978825]
We present a Symbolic Reinforcement Learning (SRL) based architecture for safety control of Radio Access Network (RAN) applications.
We provide a purely automated procedure in which a user can specify high-level logical safety specifications for a given cellular network topology.
We introduce a user interface (UI) developed to help a user set intent specifications to the system, and inspect the difference in agent proposed actions.
arXiv Detail & Related papers (2021-06-03T16:45:40Z) - Symbolic Reinforcement Learning for Safe RAN Control [62.997667081978825]
We show a Symbolic Reinforcement Learning (SRL) architecture for safe control in Radio Access Network (RAN) applications.
In our tool, a user can select high-level safety specifications expressed in Linear Temporal Logic (LTL) to shield an RL agent running in a given cellular network.
We demonstrate the user interface (UI) helping the user set intent specifications to the architecture and inspect the difference in allowed and blocked actions.
arXiv Detail & Related papers (2021-03-11T10:56:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.