Hierarchical Bayes Approach to Personalized Federated Unsupervised Learning
- URL: http://arxiv.org/abs/2402.12537v2
- Date: Sun, 25 Feb 2024 20:32:50 GMT
- Title: Hierarchical Bayes Approach to Personalized Federated Unsupervised Learning
- Authors: Kaan Ozkara, Bruce Huang, Ruida Zhou, Suhas Diggavi
- Abstract summary: We develop algorithms based on optimization criteria inspired by a hierarchical Bayesian statistical framework.
We develop adaptive algorithms that discover the balance between using limited local data and collaborative information.
We evaluate our proposed algorithms using synthetic and real data, demonstrating the effective sample amplification for personalized tasks.
- Score: 7.8583640700306585
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Statistical heterogeneity of clients' local data is an important
characteristic in federated learning, motivating personalized algorithms
tailored to the local data statistics. Though there has been a plethora of
algorithms proposed for personalized supervised learning, discovering the
structure of local data through personalized unsupervised learning is less
explored. We initiate a systematic study of such personalized unsupervised
learning by developing algorithms based on optimization criteria inspired by a
hierarchical Bayesian statistical framework. We develop adaptive algorithms
that discover the balance between using limited local data and collaborative
information. We do this in the context of two unsupervised learning tasks:
personalized dimensionality reduction and personalized diffusion models. We
develop convergence analyses for our adaptive algorithms which illustrate the
dependence on problem parameters (e.g., heterogeneity, local sample size). We
also develop a theoretical framework for personalized diffusion models, which
shows the benefits of collaboration even under heterogeneity. We finally
evaluate our proposed algorithms using synthetic and real data, demonstrating
the effective sample amplification for personalized tasks, induced through
collaboration, despite data heterogeneity.
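As a rough illustration of the adaptive balance between limited local data and collaborative information described above, consider the personalized dimensionality reduction task. The following is a minimal sketch, not the authors' algorithm: the convex-combination weight alpha, the pseudo-count tau, and all function names are assumptions made for illustration.

```python
import numpy as np

def personalized_pca(local_data, global_cov, k, tau=100.0):
    """Hypothetical sketch: blend a client's covariance with a shared one.

    tau is an assumed pseudo-count controlling how quickly the client
    trusts its own statistics over the collaborative estimate.
    """
    n_i = local_data.shape[0]
    local_cov = np.cov(local_data, rowvar=False)
    # Adaptive weight: more local samples -> more weight on local statistics.
    alpha = n_i / (n_i + tau)
    blended = alpha * local_cov + (1.0 - alpha) * global_cov
    # Personalized projection: top-k eigenvectors of the blended covariance.
    eigvals, eigvecs = np.linalg.eigh(blended)
    return eigvecs[:, np.argsort(eigvals)[::-1][:k]]

# Server side: aggregate covariance statistics from all clients (one round).
clients = [np.random.randn(30, 8) for _ in range(5)]  # small local datasets
global_cov = np.mean([np.cov(x, rowvar=False) for x in clients], axis=0)
projections = [personalized_pca(x, global_cov, k=2) for x in clients]
```

When a client has few samples, alpha stays small and the client leans on the collaborative covariance; this is one way to picture the sample amplification the abstract mentions, though the rule n/(n + tau) is only a plausible stand-in for the paper's adaptive criterion.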
Related papers
- Decentralized Personalized Federated Learning [4.5836393132815045]
We focus on creating a collaboration graph that guides each client in selecting suitable collaborators for training personalized models.
Unlike traditional methods, our formulation identifies collaborators at a granular level through greedy selection over client relations.
We achieve this through a bi-level optimization framework that employs a constrained optimization algorithm (a rough sketch of greedy collaborator selection follows this entry).
arXiv Detail & Related papers (2024-06-10T17:58:48Z)
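The entry above gives little algorithmic detail, so the following is only a guess at the flavor of greedy collaborator selection over a client graph: each client scores its peers by cosine similarity of their model updates and greedily keeps the top few as neighbors. The similarity measure, the budget parameter, and all names are assumptions, not the paper's method.

```python
import numpy as np

def greedy_collaborators(updates, client_id, budget=3):
    """Hypothetical sketch: pick collaborators by cosine similarity of updates."""
    me = updates[client_id]
    scores = {
        j: float(np.dot(me, u) / (np.linalg.norm(me) * np.linalg.norm(u)))
        for j, u in enumerate(updates) if j != client_id
    }
    # Greedily keep the `budget` most similar clients as graph neighbors.
    return sorted(scores, key=scores.get, reverse=True)[:budget]

updates = [np.random.randn(10) for _ in range(6)]  # one flattened update per client
graph = {i: greedy_collaborators(updates, i) for i in range(len(updates))}
```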
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) to tackle this data heterogeneity issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- Fed-QSSL: A Framework for Personalized Federated Learning under Bitwidth and Data Heterogeneity [14.313847382199059]
Fed-QSSL is a federated quantization-based self-supervised learning scheme designed to address heterogeneity in FL systems.
It deploys de-quantization, weighted aggregation, and re-quantization, ultimately creating models personalized to both the data distribution and the specific infrastructure of each client's device (this pipeline is sketched after this entry).
arXiv Detail & Related papers (2023-12-20T19:11:19Z)
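The Fed-QSSL summary names a concrete server-side pipeline, so here is a minimal sketch of that de-quantize, weighted-aggregate, re-quantize loop. The uniform quantizer, the per-client bitwidths, and the sample-count weights are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def quantize(w, bits):
    """Uniform quantization of a weight vector to 2**bits levels (assumed scheme)."""
    lo, hi = w.min(), w.max()
    levels = 2 ** bits - 1
    q = np.round((w - lo) / (hi - lo) * levels)
    return q, lo, hi, levels

def dequantize(q, lo, hi, levels):
    return q / levels * (hi - lo) + lo

# Each client uploads a quantized model at its own bitwidth.
rng = np.random.default_rng(0)
client_weights = [rng.normal(size=16) for _ in range(3)]
bitwidths = [4, 8, 2]                      # heterogeneous device capabilities
n_samples = np.array([50.0, 200.0, 20.0])  # aggregation weights (assumed)

uploads = [quantize(w, b) for w, b in zip(client_weights, bitwidths)]
# Server: de-quantize, aggregate weighted by local sample counts, then
# re-quantize to each client's own bitwidth before sending the model down.
recovered = [dequantize(*u) for u in uploads]
global_w = np.average(recovered, axis=0, weights=n_samples)
downloads = [dequantize(*quantize(global_w, b)) for b in bitwidths]
```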
- Distributed Personalized Empirical Risk Minimization [19.087524494290676]
This paper advocates a new paradigm, Personalized Empirical Risk Minimization (PERM), to facilitate learning from heterogeneous data sources.
We propose a distributed algorithm that replaces the standard model averaging with model shuffling to simultaneously optimize PERM objectives for all devices (sketched after this entry).
arXiv Detail & Related papers (2023-10-26T20:07:33Z)
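To make "model shuffling versus model averaging" concrete, the toy loop below passes each model to a different client every round instead of averaging them at a server, so each model eventually trains on every device's data. The round-robin permutation and the least-squares SGD step are illustrative assumptions, not the paper's actual schedule.

```python
import numpy as np

def local_sgd_step(w, X, y, lr=0.1):
    """One least-squares gradient step on a client's local data."""
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

rng = np.random.default_rng(1)
datasets = [(rng.normal(size=(20, 5)), rng.normal(size=20)) for _ in range(4)]
models = [np.zeros(5) for _ in range(4)]

for t in range(8):
    # Model shuffling: model i is trained by client (i + t) % n this round,
    # instead of being averaged into a single global model.
    for i in range(len(models)):
        X, y = datasets[(i + t) % len(datasets)]
        models[i] = local_sgd_step(models[i], X, y)
```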
- Algorithmic Collective Action in Machine Learning [35.91866986642348]
We study algorithmic collective action on digital platforms that deploy machine learning algorithms.
We propose a simple theoretical model of a collective interacting with a firm's learning algorithm.
We conduct systematic experiments on a skill classification task involving tens of thousands of resumes from a gig platform for freelancers.
arXiv Detail & Related papers (2023-02-08T18:55:49Z)
- A Generative Framework for Personalized Learning and Estimation: Theory, Algorithms, and Privacy [10.27527187262914]
We develop a generative framework for personalized learning that unifies several known personalized FL algorithms and also suggests new ones.
We also develop private personalized learning methods with guarantees for user-level privacy and composition.
arXiv Detail & Related papers (2022-07-05T02:18:44Z)
- Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two of these hurdles.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and to learn a user-specific set of parameters, leading to a personalized solution for each client (see the sketch after this entry).
arXiv Detail & Related papers (2022-06-05T01:14:46Z)
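The common-representation idea in the entry above maps naturally onto a model split: a shared feature extractor learned from all clients' data plus a small per-client head kept and trained locally. The linear model, dimensions, and update rule below are generic assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(2)
d, r = 10, 3                      # input dim, shared representation dim
B = rng.normal(size=(d, r))       # shared representation (trained collaboratively)
heads = [rng.normal(size=r) for _ in range(5)]  # one small head per client

def client_update(B, head, X, y, lr=0.05):
    """Local step: adapt the personal head; return a gradient for the shared B."""
    Z = X @ B                                  # shared features
    err = Z @ head - y
    grad_B = X.T @ np.outer(err, head) / len(y)
    head = head - lr * Z.T @ err / len(y)      # personal parameters stay local
    return head, grad_B

# One federated round: clients update heads locally; the server averages the
# representation gradients so B is learned from all clients' data.
datasets = [(rng.normal(size=(40, d)), rng.normal(size=40)) for _ in heads]
grads = []
for i, (X, y) in enumerate(datasets):
    heads[i], g = client_update(B, heads[i], X, y)
    grads.append(g)
B -= 0.05 * np.mean(grads, axis=0)
```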
- DRFLM: Distributionally Robust Federated Learning with Inter-client Noise via Local Mixup [58.894901088797376]
Federated learning has emerged as a promising approach for training a global model using data from multiple organizations without leaking their raw data.
We propose a general framework to solve the two challenges of data heterogeneity and inter-client noise simultaneously (local mixup is sketched after this entry).
We provide comprehensive theoretical analysis, including robustness analysis, convergence analysis, and generalization ability.
arXiv Detail & Related papers (2022-04-16T08:08:29Z)
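Local mixup, named in the DRFLM title, is a standard augmentation each client can run on its own data; the sketch below shows the usual convex-combination form. The Beta parameter and its placement inside a federated round are assumptions.

```python
import numpy as np

def local_mixup(X, y, alpha=0.2, rng=None):
    """Standard mixup on a client's local batch: convex combinations of
    random example pairs; labels are mixed with the same coefficient."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(X))
    X_mix = lam * X + (1.0 - lam) * X[perm]
    y_mix = lam * y + (1.0 - lam) * y[perm]
    return X_mix, y_mix

rng = np.random.default_rng(3)
X, y = rng.normal(size=(32, 8)), rng.integers(0, 2, size=32).astype(float)
X_aug, y_aug = local_mixup(X, y, rng=rng)  # train locally on the mixed batch
```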
- Personalization Improves Privacy-Accuracy Tradeoffs in Federated Optimization [57.98426940386627]
We show that coordinating local learning with private centralized learning yields a generically useful and improved tradeoff between accuracy and privacy.
We illustrate our theoretical results with experiments on synthetic and real-world datasets.
arXiv Detail & Related papers (2022-02-10T20:44:44Z)
- A Field Guide to Federated Optimization [161.3779046812383]
Federated learning and analytics are distributed approaches for collaboratively learning models (or statistics) from decentralized data.
This paper provides recommendations and guidelines on formulating, designing, evaluating and analyzing federated optimization algorithms.
arXiv Detail & Related papers (2021-07-14T18:09:08Z)
- Learning while Respecting Privacy and Robustness to Distributional Uncertainties and Adversarial Data [66.78671826743884]
A distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
The proposed algorithms offer robustness with little computational overhead (a generic robust training step is sketched after this entry).
arXiv Detail & Related papers (2020-07-07T18:25:25Z)
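As a generic illustration of the distributionally robust training this entry describes, the sketch below runs the common adversarial inner loop: perturb each input within an epsilon-ball to (approximately) maximize the loss, then take a descent step on the perturbed batch. The perturbation radius, step sizes, and least-squares loss are assumptions; the paper's exact formulation may differ (e.g., a Wasserstein ambiguity set).

```python
import numpy as np

def dro_step(w, X, y, eps=0.1, lr=0.05):
    """One robust training step: inner maximization over input perturbations
    (single gradient-ascent step, scaled to an eps-ball), then an outer
    descent step on the model parameters."""
    err = X @ w - y
    # Inner step: move inputs in the direction that increases the loss.
    gX = np.outer(err, w)                      # d(loss)/dX, up to 1/n
    delta = eps * gX / (np.linalg.norm(gX, axis=1, keepdims=True) + 1e-12)
    X_adv = X + delta
    # Outer step: descend on the perturbed data.
    grad_w = X_adv.T @ (X_adv @ w - y) / len(y)
    return w - lr * grad_w

rng = np.random.default_rng(4)
X, y = rng.normal(size=(50, 6)), rng.normal(size=50)
w = np.zeros(6)
for _ in range(20):
    w = dro_step(w, X, y)
```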
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.