Communication-Efficient and Privacy-Adaptable Mechanism -- a Federated Learning Scheme with Convergence Analysis
- URL: http://arxiv.org/abs/2601.10701v1
- Date: Thu, 15 Jan 2026 18:55:00 GMT
- Title: Communication-Efficient and Privacy-Adaptable Mechanism -- a Federated Learning Scheme with Convergence Analysis
- Authors: Chun Hei Michael Shiu, Chih Wei Ling,
- Abstract summary: Federated learning enables multiple parties to jointly train learning models without sharing their own underlying data. Recent work introduced a novel approach called the Communication-Efficient and Privacy-Adaptable Mechanism (CEPAM). We theoretically analyze the privacy guarantees and convergence properties of CEPAM.
- Score: 4.726777092009554
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning enables multiple parties to jointly train learning models without sharing their own underlying data, offering a practical pathway to privacy-preserving collaboration under data-governance constraints. Continued study of federated learning is essential to address its key challenges, including communication efficiency and privacy protection between parties. A recent line of work introduced a novel approach called the Communication-Efficient and Privacy-Adaptable Mechanism (CEPAM), which achieves both objectives simultaneously. CEPAM leverages the rejection-sampled universal quantizer (RSUQ), a randomized vector quantizer whose quantization error is equivalent to a prescribed noise, which can be tuned to customize privacy protection between parties. In this work, we theoretically analyze the privacy guarantees and convergence properties of CEPAM. Moreover, we assess CEPAM's utility performance through experimental evaluations, including convergence profiles compared with other baselines, and accuracy-privacy trade-offs between different parties.
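The key primitive in the abstract is a quantizer whose end-to-end error behaves like a prescribed noise. The sketch below shows only the classical subtractive dithered special case, where the error is uniform; the actual RSUQ uses rejection sampling to match other target distributions (e.g., Gaussian), which is not reproduced here, and all names and parameters are illustrative.

```python
import numpy as np

# Minimal sketch of subtractive dithered quantization: with a shared dither
# U ~ Unif(-step/2, step/2), the end-to-end error (reconstruction - input) is
# itself Unif(-step/2, step/2), independent of the input. RSUQ generalizes
# this idea via rejection sampling so the error can match a prescribed noise.

def client_encode(x, dither, step):
    """Quantize the dithered value onto the lattice step * Z."""
    return np.round((x + dither) / step)

def server_decode(index, dither, step):
    """Rescale and subtract the shared dither."""
    return index * step - dither

rng = np.random.default_rng(0)
step = 0.5
x = rng.normal(size=5)                                   # e.g., one block of a model update
dither = rng.uniform(-step / 2, step / 2, size=x.shape)  # shared randomness (common seed)

x_hat = server_decode(client_encode(x, dither, step), dither, step)
print("reconstruction error:", x_hat - x)                # each entry lies in [-step/2, step/2]
```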
Related papers
- Federated Attention: A Distributed Paradigm for Collaborative LLM Inference over Edge Networks [63.541114376141735]
Large language models (LLMs) are proliferating rapidly at the edge, delivering intelligent capabilities across diverse application scenarios. However, their practical deployment in collaborative scenarios confronts fundamental challenges: privacy vulnerabilities, communication overhead, and computational bottlenecks. We propose Federated Attention (FedAttn), which integrates the federated paradigm into the self-attention mechanism.
arXiv Detail & Related papers (2025-11-04T15:14:58Z) - Communication-Efficient and Privacy-Adaptable Mechanism for Federated Learning [54.20871516148981]
We introduce the Communication-Efficient and Privacy-Adaptable Mechanism (CEPAM). CEPAM achieves communication efficiency and privacy protection simultaneously. We theoretically analyze the privacy guarantee of CEPAM and investigate the trade-offs between user privacy and accuracy of CEPAM.
arXiv Detail & Related papers (2025-01-21T11:16:05Z) - CorBin-FL: A Differentially Private Federated Learning Mechanism using Common Randomness [6.881974834597426]
Federated learning (FL) has emerged as a promising framework for distributed machine learning.
We introduce CorBin-FL, a privacy mechanism that uses correlated binary quantization to achieve differential privacy.
We also propose AugCorBin-FL, an extension that, in addition to PLDP, provides user-level and sample-level central differential privacy guarantees.
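For intuition, here is a minimal sketch of plain (uncorrelated) stochastic binary quantization, the kind of building block that correlated-binary schemes such as CorBin-FL refine with shared randomness across clients; the correlation step and the PLDP calibration are not shown, and all names and parameters are illustrative.

```python
import numpy as np

# Each coordinate in [lo, hi] is mapped to one of the two endpoints with a
# probability chosen so that the quantized value is unbiased. Correlated
# schemes additionally couple these coin flips across clients (not shown).

def binary_quantize(x, lo, hi, rng):
    """Map each coordinate in [lo, hi] to lo or hi so the output is unbiased."""
    p = (x - lo) / (hi - lo)               # probability of reporting hi
    return np.where(rng.random(x.shape) < p, hi, lo)

rng = np.random.default_rng(0)
x = np.clip(rng.normal(size=1000), -1.0, 1.0)
q = binary_quantize(x, -1.0, 1.0, rng)
print(q[:8])                               # entries are exactly -1.0 or 1.0
print(np.mean(q - x))                      # near 0: the quantizer is unbiased in expectation
```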
arXiv Detail & Related papers (2024-09-20T00:23:44Z) - FewFedPIT: Towards Privacy-preserving and Few-shot Federated Instruction Tuning [54.26614091429253]
Federated instruction tuning (FedIT) is a promising solution that consolidates collaborative training across multiple data owners.
FedIT encounters limitations such as scarcity of instructional data and risk of exposure to training data extraction attacks.
We propose FewFedPIT, designed to simultaneously enhance privacy protection and model performance of federated few-shot learning.
arXiv Detail & Related papers (2024-03-10T08:41:22Z) - Generalizing Differentially Private Decentralized Deep Learning with Multi-Agent Consensus [11.414398732656839]
We propose a framework that embeds differential privacy into decentralized deep learning and secures each agent's local dataset during and after cooperative training.
We prove convergence guarantees for algorithms derived from this framework and demonstrate its practical utility when applied to subgradient and ADMM decentralized approaches.
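As a generic illustration (not the paper's exact framework), one round of differentially private decentralized averaging can be sketched as: clip each agent's local update, perturb it with the Gaussian mechanism, then mix with neighbours via a consensus weight matrix. The weight matrix and noise scale below are assumptions for the example.

```python
import numpy as np

# One illustrative round of DP decentralized averaging: clip, add Gaussian
# noise, then apply a doubly-stochastic consensus matrix W to the stacked
# noisy updates.

def dp_consensus_round(local_updates, W, clip, sigma, rng):
    """Clip each update, add Gaussian noise, then average via W."""
    noisy = []
    for g in local_updates:
        g = g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))            # norm clipping
        noisy.append(g + rng.normal(0.0, sigma * clip, size=g.shape))   # Gaussian mechanism
    return W @ np.stack(noisy)                                          # consensus averaging

rng = np.random.default_rng(0)
n_agents, dim = 4, 3
W = np.full((n_agents, n_agents), 1.0 / n_agents)     # uniform, fully connected weights
updates = [rng.normal(size=dim) for _ in range(n_agents)]
print(dp_consensus_round(updates, W, clip=1.0, sigma=0.5, rng=rng))
```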
arXiv Detail & Related papers (2023-06-24T07:46:00Z) - Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy (DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
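For readers unfamiliar with the formalism, an $f$-DP guarantee is stated as a trade-off function $f(\alpha)$ lower-bounding the type-II error of any test distinguishing neighbouring datasets at type-I error $\alpha$. The snippet below evaluates the standard Gaussian trade-off curve $G_\mu$ as a concrete reference example; it is not one of the discrete-valued mechanisms analyzed in the paper.

```python
from scipy.stats import norm

# Gaussian trade-off curve from the f-DP literature:
# G_mu(alpha) = Phi(Phi^{-1}(1 - alpha) - mu), the smallest achievable
# type-II error at type-I error alpha for the Gaussian mechanism with
# sensitivity-to-noise ratio mu.

def gaussian_tradeoff(alpha, mu):
    return norm.cdf(norm.ppf(1.0 - alpha) - mu)

for alpha in (0.01, 0.05, 0.10):
    print(alpha, gaussian_tradeoff(alpha, mu=1.0))
```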
arXiv Detail & Related papers (2023-02-19T16:58:53Z) - Privacy-Preserving Joint Edge Association and Power Optimization for the Internet of Vehicles via Federated Multi-Agent Reinforcement Learning [74.53077322713548]
We investigate the privacy-preserving joint edge association and power allocation problem.
The proposed solution strikes a compelling trade-off, while preserving a higher privacy level than the state-of-the-art solutions.
arXiv Detail & Related papers (2023-01-26T10:09:23Z) - Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
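As background, the clear-text computation being protected looks roughly as follows: one party holds part of the features, the other holds the remaining features and the labels, and a gradient step is taken from exchanged intermediates. This is an illustrative sketch only; the parties, dimensions, and update rule are assumptions, and no protection mechanism is shown.

```python
import numpy as np

# Clear-text sketch of one mini-batch gradient-descent step of vertical
# logistic regression. The privacy question is what each party can infer
# from the intermediates they exchange (partial logits and the residual).

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n, d_a, d_b = 32, 4, 3
X_A = rng.normal(size=(n, d_a))        # features held by party A
X_B = rng.normal(size=(n, d_b))        # features held by party B (the label holder)
y = rng.integers(0, 2, size=n)
w_A, w_B = np.zeros(d_a), np.zeros(d_b)
lr = 0.1

logits = X_A @ w_A + X_B @ w_B         # each party contributes a partial logit
residual = sigmoid(logits) - y         # computed by the label holder
w_A -= lr * X_A.T @ residual / n       # party A updates using the shared residual
w_B -= lr * X_B.T @ residual / n
print(w_A, w_B)
```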
arXiv Detail & Related papers (2022-07-19T05:47:30Z) - "You Can't Fix What You Can't Measure": Privately Measuring Demographic Performance Disparities in Federated Learning [78.70083858195906]
We propose differentially private mechanisms to measure differences in performance across groups while protecting the privacy of group membership.
Our results show that, contrary to what prior work suggested, protecting privacy is not necessarily in conflict with identifying performance disparities of federated models.
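As a rough illustration of the kind of release involved (not the paper's specific construction), per-group accuracies can be published with the Laplace mechanism applied to group counts; the counts, groups, and budget split below are hypothetical.

```python
import numpy as np

# Illustrative DP release of per-group accuracy: each exact count receives
# Laplace noise calibrated to sensitivity 1, with the budget eps split
# between the numerator and denominator counts.

def dp_group_accuracy(correct, total, eps, rng):
    noisy_correct = correct + rng.laplace(0.0, 2.0 / eps, size=correct.shape)
    noisy_total = total + rng.laplace(0.0, 2.0 / eps, size=total.shape)
    return noisy_correct / np.maximum(noisy_total, 1.0)

rng = np.random.default_rng(0)
correct = np.array([180.0, 75.0])      # hypothetical per-group correct-prediction counts
total = np.array([200.0, 100.0])       # hypothetical per-group sample counts
print(dp_group_accuracy(correct, total, eps=1.0, rng=rng))
```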
arXiv Detail & Related papers (2022-06-24T09:46:43Z) - No free lunch theorem for security and utility in federated learning [20.481170500480395]
In a federated learning scenario where multiple parties jointly learn a model from their respective data, there exist two conflicting goals for the choice of appropriate algorithms.
This article illustrates a general framework that formulates the trade-off between privacy loss and utility loss from a unified information-theoretic point of view.
arXiv Detail & Related papers (2022-03-11T09:48:29Z) - ppAURORA: Privacy Preserving Area Under Receiver Operating Characteristic and Precision-Recall Curves [1.933681537640272]
The AUC is a performance measure used to compare the quality of different machine learning models.
Computing the global AUC across parties is also problematic, since the labels themselves may contain privacy-sensitive information.
We propose an MPC-based solution, called ppAURORA, with private merging of individually sorted lists from multiple sources to compute the exact AUC.
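For reference, the quantity computed under MPC is the ordinary exact AUC over the merged score list. The snippet below is the clear-text rank-sum formulation of that quantity; the secure-merging and MPC machinery of ppAURORA is not shown, and the scores and labels are made up.

```python
import numpy as np
from scipy.stats import rankdata

# Clear-text reference computation of the exact AUC (Mann-Whitney / rank-sum
# formulation, with average ranks handling ties) over scores merged from all
# parties.

def exact_auc(scores, labels):
    ranks = rankdata(scores)                  # average ranks for ties
    n_pos = int(np.sum(labels == 1))
    n_neg = len(labels) - n_pos
    rank_sum = np.sum(ranks[labels == 1])
    return (rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

scores = np.array([0.9, 0.8, 0.7, 0.6, 0.55, 0.4])   # merged from all parties
labels = np.array([1, 1, 0, 1, 0, 0])
print(exact_auc(scores, labels))                     # 8/9 for this toy example
```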
arXiv Detail & Related papers (2021-02-17T14:30:22Z) - An Accurate, Scalable and Verifiable Protocol for Federated Differentially Private Averaging [0.0]
We tackle challenges regarding the privacy guarantees provided to participants and the correctness of the computation in the presence of malicious parties.
Our first contribution is a scalable protocol in which participants exchange correlated Gaussian noise along the edges of a network graph.
Our second contribution enables users to prove the correctness of their computations without compromising the efficiency and privacy guarantees of the protocol.
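The core cancellation idea behind edge-correlated noise can be seen in clear text: the two endpoints of each edge add and subtract the same Gaussian term, so individual reports are heavily perturbed while the aggregate is barely affected. The graph, noise scales, and correctness-proof component are not those of the paper; everything below is an illustrative assumption.

```python
import numpy as np

# Pairwise-correlated noise on a graph: for each edge (i, j), user i adds a
# shared Gaussian term and user j subtracts it, so the terms cancel in sums
# while masking each individual report.

rng = np.random.default_rng(0)
n = 5
x = rng.normal(size=n)                                       # private local values
edges = [(i, j) for i in range(n) for j in range(i + 1, n)]  # complete communication graph

reports = x + rng.normal(0.0, 0.1, size=n)   # small independent noise per user
for i, j in edges:
    eta = rng.normal(0.0, 5.0)               # large noise shared by the edge's endpoints
    reports[i] += eta                        # user i adds the shared term ...
    reports[j] -= eta                        # ... user j subtracts it

print("true mean:    ", np.mean(x))
print("reported mean:", np.mean(reports))    # pairwise terms cancel; only small noise remains
```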
arXiv Detail & Related papers (2020-06-12T14:21:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.