Federated PAC-Bayesian Learning on Non-IID data
- URL: http://arxiv.org/abs/2309.06683v1
- Date: Wed, 13 Sep 2023 02:44:01 GMT
- Title: Federated PAC-Bayesian Learning on Non-IID data
- Authors: Zihao Zhao, Yang Liu, Wenbo Ding, Xiao-Ping Zhang
- Abstract summary: We present the first non-vacuous federated PAC-Bayesian bound tailored for non-IID local data.
We introduce an objective function and an innovative Gibbs-based algorithm for the optimization of the derived bound.
- Score: 18.838513808688287
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Existing research has either adapted the Probably Approximately Correct (PAC)
Bayesian framework for federated learning (FL) or used information-theoretic
PAC-Bayesian bounds while introducing their theorems, but few consider the
non-IID challenges in FL. Our work presents the first non-vacuous federated
PAC-Bayesian bound tailored for non-IID local data. This bound assumes unique
prior knowledge for each client and variable aggregation weights. We also
introduce an objective function and an innovative Gibbs-based algorithm for the
optimization of the derived bound. The results are validated on real-world
datasets.
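For intuition, a schematic form of such a bound is sketched below: a weighted combination of per-client McAllester-style bounds with client-specific priors and aggregation weights. This is a sketch under standard PAC-Bayes assumptions, not the paper's exact theorem; constants, rates, and the Gibbs construction differ in detail.

```latex
% Schematic only: K clients, local sample sizes m_i, client-specific
% priors P_i, shared posterior Q, aggregation weights \lambda_i summing
% to 1; a union bound with \delta_i = \lambda_i \delta makes the whole
% statement hold with probability at least 1 - \delta.
\sum_{i=1}^{K} \lambda_i \,\mathbb{E}_{h \sim Q}\!\big[\mathcal{L}_i(h)\big]
\;\le\; \sum_{i=1}^{K} \lambda_i \left(
    \mathbb{E}_{h \sim Q}\!\big[\hat{\mathcal{L}}_i(h)\big]
    + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P_i) + \ln\frac{2\sqrt{m_i}}{\lambda_i \delta}}{2 m_i}}
\right)
% A Gibbs posterior for a reference prior P and inverse temperature
% \beta, of the kind such bounds are optimized with:
Q_\beta(h) \;\propto\; P(h)\,\exp\!\Big(-\beta \sum_{i=1}^{K} \lambda_i \,\hat{\mathcal{L}}_i(h)\Big)
```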
Related papers
- Uniform Generalization Bounds on Data-Dependent Hypothesis Sets via PAC-Bayesian Theory on Random Sets [25.250314934981233]
We first apply the PAC-Bayesian framework to 'random sets' in a rigorous way, where the training algorithm is assumed to output a data-dependent hypothesis set.
This approach allows us to prove data-dependent bounds, which can be applicable in numerous contexts.
arXiv Detail & Related papers (2024-04-26T14:28:18Z) - Personalized Federated Learning of Probabilistic Models: A PAC-Bayesian
Approach [42.59649764999974]
Federated learning aims to infer a shared model from private and decentralized data stored locally by multiple clients.
We propose a PFL algorithm named PAC-PFL for learning probabilistic models within a PAC-Bayesian framework.
Our algorithm collaboratively learns a shared hyper-posterior and regards each client's posterior inference as the personalization step.
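A minimal sketch of this two-level idea is given below. The Gaussian parameterization, update rules, and all names are illustrative assumptions, not PAC-PFL's actual implementation.

```python
# Hypothetical sketch of a PAC-PFL-style loop: the server maintains a
# hyper-posterior over Gaussian prior means; each client samples a prior
# from it and personalizes a posterior on its local data.
import numpy as np

rng = np.random.default_rng(0)
dim, n_clients, rounds = 5, 4, 20
clients = [rng.normal(loc=0.1 * i, size=(50, dim)) for i in range(n_clients)]

hyper_mu, hyper_sigma = np.zeros(dim), 1.0  # hyper-posterior over prior means

for _ in range(rounds):
    posteriors = []
    for X in clients:
        prior_mu = rng.normal(hyper_mu, hyper_sigma)  # sample a prior
        # Personalized posterior mean: shrink the local mean toward the
        # sampled prior (closed form for a unit-variance Gaussian model).
        n = len(X)
        post_mu = (n * X.mean(axis=0) + prior_mu) / (n + 1)
        posteriors.append(post_mu)
    # Server step: move the hyper-posterior toward the clients' posteriors.
    hyper_mu = 0.9 * hyper_mu + 0.1 * np.mean(posteriors, axis=0)

print("hyper-posterior mean:", np.round(hyper_mu, 3))
```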
arXiv Detail & Related papers (2024-01-16T13:30:37Z) - Privacy-preserving Federated Primal-dual Learning for Non-convex and Non-smooth Problems with Model Sparsification [51.04894019092156]
Federated learning (FL) has been recognized as a rapidly growing research area, where the model is trained over distributed clients under the orchestration of a parameter server (PS).
In this paper, we propose a novel privacy-preserving federated primal-dual algorithm with model sparsification for non-convex and non-smooth FL problems.
Its key properties and theoretical analyses are also presented.
arXiv Detail & Related papers (2023-10-30T14:15:47Z) - Going beyond research datasets: Novel intent discovery in the industry
setting [60.90117614762879]
This paper proposes methods to improve the intent discovery pipeline deployed in a large e-commerce platform.
We show the benefit of pre-training language models on in-domain data: both self-supervised and with weak supervision.
We also devise the best method to utilize the conversational structure (i.e., question and answer) of real-life datasets during fine-tuning for clustering tasks, which we call Conv.
arXiv Detail & Related papers (2023-05-09T14:21:29Z) - DPP-based Client Selection for Federated Learning with Non-IID Data [97.1195165400568]
This paper proposes a client selection (CS) method to tackle the communication bottleneck of federated learning (FL).
We first analyze the effect of CS in FL and show that FL training can be accelerated by adequately choosing participants to diversify the training dataset in each round of training.
We leverage data profiling and determinantal point process (DPP) sampling techniques to develop an algorithm termed Federated Learning with DPP-based Participant Selection (FL-DP$^3$S).
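The diversification idea can be sketched as greedy MAP inference for a DPP over clients, as below. The kernel here is an RBF similarity over hypothetical per-client label histograms; the actual FL-DP$^3$S algorithm builds its kernel from data profiling differently.

```python
# Illustrative greedy MAP selection for a DPP over clients: repeatedly add
# the client that most increases log det of the selected kernel submatrix,
# favoring mutually dissimilar participants.
import numpy as np

def greedy_dpp_select(L, k):
    n, selected = L.shape[0], []
    for _ in range(k):
        best, best_logdet = None, -np.inf
        for j in range(n):
            if j in selected:
                continue
            idx = selected + [j]
            sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            # log det of the grown set equals a constant plus the marginal
            # gain, so maximizing it picks the most diversifying client.
            if sign > 0 and logdet > best_logdet:
                best, best_logdet = j, logdet
        selected.append(best)
    return selected

rng = np.random.default_rng(1)
profiles = rng.dirichlet(np.ones(10), size=30)  # 30 clients, 10-class histograms
d2 = ((profiles[:, None, :] - profiles[None, :, :]) ** 2).sum(-1)
L = np.exp(-d2 / 0.1)                           # RBF similarity kernel
print("selected clients:", greedy_dpp_select(L, k=5))
```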
arXiv Detail & Related papers (2023-03-30T13:14:54Z) - Differentially Private Federated Clustering over Non-IID Data [59.611244450530315]
The federated clustering (FedC) problem aims to accurately partition unlabeled data samples distributed over massive clients into a finite number of clusters under the orchestration of a server.
We propose a novel FedC algorithm with differential privacy guarantees, referred to as DP-Fed, in which the partial participation of multiple clients is also considered.
Various properties of the proposed DP-Fed are established through theoretical analyses of its privacy protection, especially for the case of non-independent and identically distributed (non-i.i.d.) data.
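For intuition, one round of a generic differentially private federated clustering update is sketched below via the Gaussian mechanism on per-cluster statistics. This is not the DP-Fed algorithm itself; in particular, `sigma` is a stand-in, not a noise scale calibrated by a privacy accountant.

```python
# Generic sketch: clients perturb per-cluster sums and counts with Gaussian
# noise before the server aggregates them into new centers.
import numpy as np

def local_dp_stats(X, centers, sigma, rng):
    k, d = centers.shape
    labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
    sums = np.stack([X[labels == c].sum(axis=0) for c in range(k)])
    counts = np.array([(labels == c).sum() for c in range(k)], dtype=float)
    return sums + rng.normal(0, sigma, sums.shape), counts + rng.normal(0, sigma, k)

rng = np.random.default_rng(2)
clients = [rng.normal(loc=c, size=(100, 2)) for c in (-2.0, 0.0, 2.0)]
centers = rng.normal(size=(3, 2))
for _ in range(10):
    stats = [local_dp_stats(X, centers, sigma=0.5, rng=rng) for X in clients]
    tot_sums = sum(s for s, _ in stats)
    tot_counts = sum(c for _, c in stats)
    centers = tot_sums / np.maximum(tot_counts, 1.0)[:, None]
print(np.round(centers, 2))
```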
arXiv Detail & Related papers (2023-01-03T05:38:43Z) - Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on a momentum-based variance-reduction technique in the cross-silo FL setting.
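A single-worker sketch of the momentum-based variance-reduction estimator (STORM-style) underlying this family of methods is shown below; the adaptive step sizes and cross-silo aggregation of the actual FAFED algorithm are omitted.

```python
# STORM-style variance-reduced gradient estimator on a toy least-squares
# problem: d_{t+1} = g(x_{t+1}; B) + (1 - alpha) * (d_t - g(x_t; B)),
# with both gradients taken on the SAME mini-batch B.
import numpy as np

rng = np.random.default_rng(3)
A, b = rng.normal(size=(200, 5)), rng.normal(size=200)

def stoch_grad(x, idx):
    Ai, bi = A[idx], b[idx]
    return Ai.T @ (Ai @ x - bi) / len(idx)

x = np.zeros(5)
d = stoch_grad(x, rng.choice(200, 16))  # initialize the estimator
eta, alpha = 0.01, 0.1
for _ in range(500):
    x_new = x - eta * d
    idx = rng.choice(200, 16)
    # The correction term (d - stoch_grad(x, idx)) shrinks the variance of
    # the estimator while keeping it nearly unbiased.
    d = stoch_grad(x_new, idx) + (1 - alpha) * (d - stoch_grad(x, idx))
    x = x_new
print("residual norm:", round(float(np.linalg.norm(A @ x - b)), 3))
```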
arXiv Detail & Related papers (2022-12-02T05:07:50Z) - Rethinking Data Heterogeneity in Federated Learning: Introducing a New
Notion and Standard Benchmarks [65.34113135080105]
We show that data heterogeneity in current setups is not necessarily a problem and can in fact be beneficial for the FL participants.
Our observations are intuitive.
Our code is available at https://github.com/MMorafah/FL-SC-NIID.
arXiv Detail & Related papers (2022-09-30T17:15:19Z) - Information Complexity and Generalization Bounds [0.0]
We show a unifying picture of PAC-Bayesian and mutual information-based upper bounds on randomized learning algorithms.
We discuss two practical examples for learning with neural networks, namely, Entropy-SGD and PAC-Bayes-SGD.
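One representative formula from this unifying picture is the Xu–Raginsky mutual-information bound: for a σ-sub-Gaussian loss, m i.i.d. samples S, and learned weights W,

```latex
\left| \mathbb{E}\big[ L_\mu(W) - \hat{L}_S(W) \big] \right|
\;\le\; \sqrt{\frac{2\sigma^2}{m}\, I(S; W)}
```

where I(S; W) is the mutual information between the training data and the algorithm's output.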
arXiv Detail & Related papers (2021-05-04T20:37:57Z) - Sample-based and Feature-based Federated Learning via Mini-batch SSCA [18.11773963976481]
This paper investigates sample-based and feature-based federated optimization.
We show that the proposed algorithms can preserve data privacy through the model aggregation mechanism.
We also show that the proposed algorithms converge to Karush-Kuhn-Tucker points of the respective federated optimization problems.
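As a rough illustration of the mini-batch stochastic SCA template such algorithms follow, a generic (non-federated) sketch is given below: each step minimizes a strongly convex quadratic surrogate built from a mini-batch gradient, then takes a convex combination with the current iterate. The federated sample/feature splitting and aggregation are omitted.

```python
# Generic mini-batch SSCA step on a toy least-squares objective.
import numpy as np

rng = np.random.default_rng(4)
A, b = rng.normal(size=(500, 8)), rng.normal(size=500)

x, tau = np.zeros(8), 1.0
for t in range(1, 301):
    idx = rng.choice(500, 32)
    g = A[idx].T @ (A[idx] @ x - b[idx]) / 32  # mini-batch gradient
    # Surrogate f(x_t) + g^T (y - x_t) + (tau/2) ||y - x_t||^2 is minimized
    # in closed form at y = x_t - g / tau.
    x_hat = x - g / tau
    gamma = 1.0 / t ** 0.75  # diminishing step size, as SSCA theory requires
    x = (1 - gamma) * x + gamma * x_hat
print("mean squared error:", round(float(np.mean((A @ x - b) ** 2)), 4))
```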
arXiv Detail & Related papers (2021-04-13T08:23:46Z) - PAC-Bayes Bounds for Meta-learning with Data-Dependent Prior [36.38937352131301]
We derive three novel generalisation error bounds for meta-learning based on the PAC-Bayes relative entropy bound.
Experiments illustrate that the proposed three PAC-Bayes bounds for meta-learning provide a competitive generalization performance guarantee.
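Such bounds share the schematic two-level structure below (a sketch only; constants and exact rates differ across the paper's three bounds), with n observed tasks, m_i samples per task, hyper-posterior $\mathcal{Q}$ over priors, and hyper-prior $\mathcal{P}$:

```latex
\mathcal{L}_{\mathrm{transfer}}(\mathcal{Q}) \;\lesssim\;
\hat{\mathcal{L}}(\mathcal{Q})
+ \underbrace{\sqrt{\frac{\mathrm{KL}(\mathcal{Q}\,\|\,\mathcal{P})}{n}}}_{\text{environment level}}
+ \underbrace{\frac{1}{n}\sum_{i=1}^{n}
    \sqrt{\frac{\mathbb{E}_{P \sim \mathcal{Q}}\,\mathrm{KL}(Q_i \,\|\, P)}{m_i}}}_{\text{per-task level}}
```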
arXiv Detail & Related papers (2021-02-07T09:03:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.