An Information-Theoretic Analysis of The Cost of Decentralization for
Learning and Inference Under Privacy Constraints
- URL: http://arxiv.org/abs/2110.05014v1
- Date: Mon, 11 Oct 2021 05:55:30 GMT
- Title: An Information-Theoretic Analysis of The Cost of Decentralization for
Learning and Inference Under Privacy Constraints
- Authors: Sharu Theresa Jose, Osvaldo Simeone
- Abstract summary: In vertical federated learning, the features of a data sample are distributed across multiple agents.
A fundamental theoretical question is how to quantify the cost, or performance loss, of decentralization for learning and/or inference.
- Score: 44.320945743871285
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In vertical federated learning (FL), the features of a data sample are
distributed across multiple agents. As such, inter-agent collaboration can be
beneficial not only during the learning phase, as is the case for standard
horizontal FL, but also during the inference phase. A fundamental theoretical
question in this setting is how to quantify the cost, or performance loss, of
decentralization for learning and/or inference. In this paper, we consider
general supervised learning problems with any number of agents, and provide a
novel information-theoretic quantification of the cost of decentralization in
the presence of privacy constraints on inter-agent communication within a
Bayesian framework. The cost of decentralization for learning and/or inference
is shown to be quantified in terms of conditional mutual information terms
involving features and label variables.
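The cost terms described above are conditional mutual informations between feature and label variables. For discrete variables these can be estimated directly from samples with a plug-in estimator. The sketch below is illustrative only (the function name and estimator choice are my assumptions, not the paper's construction): think of `xs` as one agent's features, `ys` as the labels, and `zs` as another agent's features.

```python
import math
from collections import Counter

def conditional_mutual_information(xs, ys, zs):
    """Plug-in estimate of I(X; Y | Z) in bits from paired discrete samples."""
    n = len(xs)
    pxyz = Counter(zip(xs, ys, zs))
    pxz = Counter(zip(xs, zs))
    pyz = Counter(zip(ys, zs))
    pz = Counter(zs)
    cmi = 0.0
    for (x, y, z), c in pxyz.items():
        # p(x,y,z) * log2( p(x,y,z) * p(z) / (p(x,z) * p(y,z)) )
        cmi += (c / n) * math.log2(
            (c / n) * (pz[z] / n) / ((pxz[(x, z)] / n) * (pyz[(y, z)] / n))
        )
    return cmi
```

When X is fully determined by Z, the estimate is 0; when X and Y are perfectly correlated binary variables and Z is constant, it is 1 bit.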
Related papers
- Provable Privacy Advantages of Decentralized Federated Learning via Distributed Optimization [16.418338197742287]
Federated learning (FL) emerged as a paradigm designed to improve data privacy by enabling data to reside at its source.
Recent findings suggest that decentralized FL does not empirically offer any additional privacy or security benefits over centralized models.
We demonstrate that decentralized FL, when deploying distributed optimization, provides enhanced privacy protection.
arXiv Detail & Related papers (2024-07-12T15:01:09Z)
- Distribution-Free Fair Federated Learning with Small Samples [54.63321245634712]
FedFaiREE is a post-processing algorithm developed specifically for distribution-free fair learning in decentralized settings with small samples.
We provide rigorous theoretical guarantees for both fairness and accuracy, and our experimental results further provide robust empirical validation for our proposed method.
arXiv Detail & Related papers (2024-02-25T17:37:53Z)
- Conformal Prediction for Federated Uncertainty Quantification Under Label Shift [57.54977668978613]
Federated Learning (FL) is a machine learning framework where many clients collaboratively train models.
We develop a new conformal prediction method based on quantile regression and take into account privacy constraints.
arXiv Detail & Related papers (2023-06-08T11:54:58Z)
- When Decentralized Optimization Meets Federated Learning [41.58479981773202]
Federated learning is a new learning paradigm for extracting knowledge from distributed data.
Most existing federated learning approaches concentrate on the centralized setting, which is vulnerable to a single-point failure.
An alternative strategy for addressing this issue is the decentralized communication topology.
arXiv Detail & Related papers (2023-06-05T03:51:14Z)
- Feature Correlation-guided Knowledge Transfer for Federated Self-supervised Learning [19.505644178449046]
We propose a novel and general method named Federated Self-supervised Learning with Feature-correlation based Aggregation (FedFoA).
Our insight is to utilize feature correlation to align the feature mappings and calibrate the local model updates across clients during their local training process.
We prove that FedFoA is a model-agnostic training framework that can be readily combined with state-of-the-art unsupervised FL methods.
arXiv Detail & Related papers (2022-11-14T13:59:50Z)
- Quantization for decentralized learning under subspace constraints [61.59416703323886]
We consider decentralized optimization problems where agents have individual cost functions to minimize subject to subspace constraints.
We propose and study an adaptive decentralized strategy where the agents employ differential randomized quantizers to compress their estimates.
The analysis shows that, under some general conditions on the quantization noise, the strategy is stable both in terms of mean-square error and average bit rate.
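A core ingredient of such schemes is an unbiased randomized quantizer. The sketch below illustrates the general idea only (the uniform-grid construction and function name are my assumptions, not this paper's exact differential quantizer):

```python
import math
import random

def randomized_quantize(x: float, step: float) -> float:
    """Stochastic quantizer on a uniform grid of spacing `step`.

    Rounds x down or up to the adjacent grid points, with probabilities
    chosen so that the output is unbiased: E[q(x)] = x.
    """
    lower = step * math.floor(x / step)
    p_up = (x - lower) / step  # probability of rounding up
    return lower + step if random.random() < p_up else lower
```

Unbiasedness means the quantization noise averages out across iterations, which is what makes a mean-square stability analysis under general noise conditions possible.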
arXiv Detail & Related papers (2022-09-16T09:38:38Z)
- Secure Distributed/Federated Learning: Prediction-Privacy Trade-Off for Multi-Agent System [4.190359509901197]
In the big data era, when performing inference within distributed and federated learning (DL and FL) frameworks, the central server needs to process a large amount of data.
Considering the decentralized computing topology, privacy has become a first-class concern.
We study the privacy-aware server-to-multi-agent assignment problem subject to information processing constraints associated with each agent.
arXiv Detail & Related papers (2022-04-24T19:19:20Z)
- Decentralized Local Stochastic Extra-Gradient for Variational Inequalities [125.62877849447729]
We consider distributed variational inequalities (VIs) on domains with the problem data that is heterogeneous (non-IID) and distributed across many devices.
We make a very general assumption on the computational network that covers the setting of fully decentralized computation.
We theoretically analyze its convergence rate in the strongly-monotone, monotone, and non-monotone settings.
arXiv Detail & Related papers (2021-06-15T17:45:51Z)
- Consensus Control for Decentralized Deep Learning [72.50487751271069]
Decentralized training of deep learning models enables on-device learning over networks, as well as efficient scaling to large compute clusters.
We show in theory that when the training consensus distance is lower than a critical quantity, decentralized training converges as fast as the centralized counterpart.
Our empirical insights allow the principled design of better decentralized training schemes that mitigate the performance drop.
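Consensus distance is commonly defined as the average squared deviation of the agents' parameter vectors from their network-wide mean. A minimal sketch under that assumed definition (the function name is illustrative, not from the paper):

```python
def consensus_distance(models: list[list[float]]) -> float:
    """Average squared Euclidean distance of each agent's parameter
    vector from the mean parameter vector across agents."""
    n, d = len(models), len(models[0])
    mean = [sum(m[j] for m in models) / n for j in range(d)]
    return sum(
        sum((m[j] - mean[j]) ** 2 for j in range(d)) for m in models
    ) / n
```

When all agents hold identical parameters the distance is zero; the critical-quantity result says training behaves like the centralized counterpart as long as this value stays small.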
arXiv Detail & Related papers (2021-02-09T13:58:33Z)
- Differential Privacy Meets Federated Learning under Communication Constraints [20.836834724783007]
This paper investigates the trade-offs between communication costs and training variance under a resource-constrained federated system.
The results provide important insights into designing practical privacy-aware federated learning systems.
arXiv Detail & Related papers (2021-01-28T19:20:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.