A Generative Framework for Personalized Learning and Estimation: Theory,
Algorithms, and Privacy
- URL: http://arxiv.org/abs/2207.01771v1
- Date: Tue, 5 Jul 2022 02:18:44 GMT
- Title: A Generative Framework for Personalized Learning and Estimation: Theory,
Algorithms, and Privacy
- Authors: Kaan Ozkara, Antonious M. Girgis, Deepesh Data, Suhas Diggavi
- Abstract summary: We develop a generative framework that unifies several known personalized FL algorithms and suggests new ones.
We apply the framework both to personalized estimation, connecting it to classical empirical Bayes methodology, and to personalized learning.
We also develop private personalized learning methods with user-level privacy and composition guarantees.
- Score: 10.27527187262914
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A distinguishing characteristic of federated learning is that the (local)
client data could have statistical heterogeneity. This heterogeneity has
motivated the design of personalized learning, where individual (personalized)
models are trained through collaboration. Various personalization methods have
been proposed in the literature, with seemingly very different forms, ranging
from the use of a single global model for local regularization and model
interpolation to the use of multiple global models for personalized clustering.
In this work, we begin with a generative
framework that could potentially unify several different algorithms as well as
suggest new algorithms. We apply our generative framework to personalized
estimation, and connect it to the classical empirical Bayes methodology. We
develop private personalized estimation under this framework. We then use our
generative framework for learning, which unifies several known personalized FL
algorithms and also suggests new ones; we propose and study a new algorithm
AdaPeD, based on knowledge distillation, which numerically outperforms several
known algorithms. We also develop private personalized learning methods with
user-level privacy and composition guarantees. We numerically evaluate
the performance as well as the privacy for both the estimation and learning
problems, demonstrating the advantages of our proposed methods.
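The empirical Bayes flavor of personalized estimation can be illustrated with a toy hierarchical Gaussian model. This is our own sketch under assumed known variances, not the paper's exact formulation: each client's mean is drawn from a population distribution, and the personalized estimate shrinks the local average toward a collaboratively estimated global mean.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hierarchical Gaussian model (illustrative assumption, not the paper's
# exact setup): client means theta_i ~ N(mu, tau^2), and each client observes
# n samples from N(theta_i, sigma^2).
mu_true, tau, sigma, n_clients, n = 1.0, 0.5, 2.0, 50, 10
theta = rng.normal(mu_true, tau, n_clients)
data = rng.normal(theta[:, None], sigma, (n_clients, n))

local_means = data.mean(axis=1)        # purely local estimates
mu_hat = local_means.mean()            # collaborative estimate of the global mean
w = tau**2 / (tau**2 + sigma**2 / n)   # shrinkage weight (variances known here)
personalized = w * local_means + (1 - w) * mu_hat

mse_local = np.mean((local_means - theta) ** 2)
mse_pers = np.mean((personalized - theta) ** 2)
print(mse_local, mse_pers)
```

With few local samples the shrunken estimates typically achieve lower mean-squared error than purely local averages, which is the basic collaboration gain the estimation part of the paper quantifies.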
Related papers
- Blind Federated Learning without initial model [1.104960878651584]
Federated learning is an emerging machine learning approach that allows several participants, each holding their own private data, to jointly construct a model.
This method is secure and privacy-preserving, suitable for training a machine learning model using sensitive data from different sources, such as hospitals.
arXiv Detail & Related papers (2024-04-24T20:10:10Z)
- Personalized Federated Learning via Stacking [0.0]
We present a novel personalization approach based on stacked generalization where clients directly send each other privacy-preserving models to be used as base models to train a meta-model on private data.
Our approach is flexible, accommodating various privacy-preserving techniques and model types, and can be applied in horizontal, hybrid, and vertically partitioned federations.
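The stacking idea can be sketched in a few lines. This is a hypothetical toy (linear base models, least-squares meta-model), not the paper's actual implementation: clients share base models, and each client privately fits a meta-model over the base models' predictions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sketch of stacked personalization: each client fits a linear
# base model on its own data and shares it; a client then trains a meta-model
# on its private data over the base models' predictions.
clients, d, n = 3, 4, 50
true_w = rng.normal(size=(clients, d))
X = [rng.normal(size=(n, d)) for _ in range(clients)]
y = [X[i] @ true_w[i] + 0.1 * rng.normal(size=n) for i in range(clients)]

# shared base models (plain least squares stands in for any model type)
base = [np.linalg.lstsq(X[i], y[i], rcond=None)[0] for i in range(clients)]

def meta_fit(Xc, yc):
    """Fit meta-weights over base-model predictions on a client's private data."""
    P = np.column_stack([Xc @ b for b in base])
    return np.linalg.lstsq(P, yc, rcond=None)[0]

alpha0 = meta_fit(X[0], y[0])
print(alpha0)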
arXiv Detail & Related papers (2024-04-16T23:47:23Z)
- Hierarchical Bayes Approach to Personalized Federated Unsupervised Learning [7.8583640700306585]
We develop algorithms based on optimization criteria inspired by a hierarchical Bayesian statistical framework.
We develop adaptive algorithms that discover the balance between using limited local data and collaborative information.
We evaluate our proposed algorithms using synthetic and real data, demonstrating the effective sample amplification for personalized tasks.
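The adaptive balance between limited local data and collaborative information can be sketched in a toy Gaussian setting (our own reading of the general idea, not this paper's algorithm): estimate the across-client variance from the data itself, so the local-vs-global weight is learned rather than fixed.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy sketch (our assumption): estimate tau^2 across clients by a
# method-of-moments rule, then form an adaptive shrinkage weight.
sigma, n, n_clients = 1.0, 5, 100
theta = rng.normal(0.0, 0.8, n_clients)
x_bar = rng.normal(theta, sigma / np.sqrt(n))   # per-client sample means

mu_hat = x_bar.mean()
s2 = x_bar.var(ddof=1)                          # approx tau^2 + sigma^2 / n
tau2_hat = max(s2 - sigma**2 / n, 0.0)          # method-of-moments estimate
w = tau2_hat / (tau2_hat + sigma**2 / n)        # adaptive local-vs-global weight
personalized = w * x_bar + (1 - w) * mu_hat
print(w)
```

When clients are very heterogeneous the estimated weight approaches 1 (trust local data); when they are nearly identical it approaches 0 (trust the collaborative mean).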
arXiv Detail & Related papers (2024-02-19T20:53:27Z)
- Learn What You Need in Personalized Federated Learning [53.83081622573734]
Learn2pFed is a novel algorithm-unrolling-based personalized federated learning framework.
We show that Learn2pFed significantly outperforms previous personalized federated learning methods.
arXiv Detail & Related papers (2024-01-16T12:45:15Z)
- Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two of these hurdles.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
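The shared-representation idea can be sketched as alternating minimization on a toy linear model. This is our own illustration, not the paper's procedure: clients share a low-rank representation B and keep personal heads w_i, alternating local head fits with a global update of B.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy sketch (our assumption): all clients' regressors share a common column
# space B_true; personalization lives in the per-client heads.
d, k, n, clients = 10, 2, 60, 5
B_true = np.linalg.qr(rng.normal(size=(d, k)))[0]
W_true = rng.normal(size=(clients, k))
X = [rng.normal(size=(n, d)) for _ in range(clients)]
y = [X[i] @ B_true @ W_true[i] for i in range(clients)]

def fit_heads(B):
    # each client fits its personal head locally on top of the shared B
    return [np.linalg.lstsq(X[i] @ B, y[i], rcond=None)[0] for i in range(clients)]

def total_loss(B, W):
    return sum(np.sum((X[i] @ B @ W[i] - y[i]) ** 2) for i in range(clients))

B = np.linalg.qr(rng.normal(size=(d, k)))[0]    # random initial representation
loss_init = total_loss(B, fit_heads(B))
for _ in range(200):
    W = fit_heads(B)
    # server aggregates gradients from all clients to update the representation
    G = sum(np.outer(X[i].T @ (X[i] @ B @ W[i] - y[i]), W[i]) for i in range(clients))
    B = np.linalg.qr(B - 5e-4 * G)[0]           # gradient step + re-orthonormalize
loss_final = total_loss(B, fit_heads(B))
print(loss_init, loss_final)
```

The representation update pools all clients' data while each head remains client-specific, which is the division of labor the summary above describes.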
arXiv Detail & Related papers (2022-06-05T01:14:46Z)
- Model-Based Deep Learning: On the Intersection of Deep Learning and Optimization [101.32332941117271]
Decision making algorithms are used in a multitude of different applications.
Deep learning approaches that use highly parametric architectures tuned from data without relying on mathematical models are becoming increasingly popular.
Model-based optimization and data-centric deep learning are often considered to be distinct disciplines.
arXiv Detail & Related papers (2022-05-05T13:40:08Z)
- Personalization Improves Privacy-Accuracy Tradeoffs in Federated Optimization [57.98426940386627]
We show that coordinating local learning with private centralized learning yields a generically useful and improved tradeoff between accuracy and privacy.
We illustrate our theoretical results with experiments on synthetic and real-world datasets.
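The claimed tradeoff can be illustrated with a toy mean-estimation example (our own setup, with a fixed noise scale standing in for a real DP mechanism): a global estimate released with privacy noise is interpolated with each client's noiseless local estimate.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy illustration (our assumption): privacy noise is added only to the
# centralized quantity; local data stays on-device and noiseless.
n_clients, n, sigma, tau = 1000, 5, 1.0, 1.0
theta = rng.normal(0.0, tau, n_clients)
x_bar = rng.normal(theta, sigma / np.sqrt(n))

dp_noise = 0.05                                  # stand-in for a DP mechanism's noise scale
mu_private = x_bar.mean() + rng.normal(0.0, dp_noise)

def mse(alpha):
    # alpha = 1 is purely local, alpha = 0 is purely global (private)
    est = alpha * x_bar + (1 - alpha) * mu_private
    return np.mean((est - theta) ** 2)

mse_global, mse_local = mse(0.0), mse(1.0)
alpha_opt = tau**2 / (tau**2 + sigma**2 / n)     # oracle interpolation weight
print(mse_global, mse_local, mse(alpha_opt))
```

Because the privacy noise enters only through the global component, interpolating toward local data blunts its accuracy cost, which is the qualitative effect the summary describes.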
arXiv Detail & Related papers (2022-02-10T20:44:44Z)
- Non-IID data and Continual Learning processes in Federated Learning: A long road ahead [58.720142291102135]
Federated Learning is a novel framework that allows multiple devices or institutions to train a machine learning model collaboratively while keeping their data private.
In this work, we formally classify data statistical heterogeneity and review the most remarkable learning strategies that are able to face it.
At the same time, we introduce approaches from other machine learning frameworks, such as Continual Learning, that also deal with data heterogeneity and could be easily adapted to the Federated Learning settings.
arXiv Detail & Related papers (2021-11-26T09:57:11Z)
- A Federated Learning Aggregation Algorithm for Pervasive Computing: Evaluation and Comparison [0.6299766708197883]
Pervasive computing promotes the installation of connected devices in our living spaces in order to provide services.
Two major developments have gained significant momentum recently: an advanced use of edge resources and the integration of machine learning techniques for engineering applications.
We propose a novel aggregation algorithm, termed FedDist, which is able to modify its model architecture by identifying dissimilarities between specific neurons amongst the clients.
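The neuron-dissimilarity idea can be sketched roughly as follows. This is a hypothetical illustration of the general principle, not FedDist's actual rule: compare corresponding neurons across client models and flag those whose weights diverge strongly from the federated average instead of merging them.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical sketch (not FedDist's actual mechanism): measure each neuron's
# spread across clients relative to the averaged model.
clients, neurons, d = 4, 6, 8
W = rng.normal(size=(clients, neurons, d))
W[:, 0, :] += 3.0 * rng.normal(size=(clients, 1))   # make neuron 0 client-specific

mean_w = W.mean(axis=0)
dissim = np.linalg.norm(W - mean_w, axis=2).mean(axis=0)  # per-neuron spread
flagged = dissim > 2.0 * np.median(dissim)
print(dissim.round(2), flagged)
```

Neurons with high spread are candidates to keep client-specific (or to spawn new units for), while low-spread neurons can be safely averaged.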
arXiv Detail & Related papers (2021-10-19T19:43:28Z)
- Three Approaches for Personalization with Applications to Federated Learning [68.19709953755238]
We present a systematic learning-theoretic study of personalization.
We provide learning-theoretic guarantees and efficient algorithms for which we also demonstrate the performance.
All of our algorithms are model-agnostic and work for any hypothesis class.
arXiv Detail & Related papers (2020-02-25T01:36:43Z)