Online Meta-Learning for Model Update Aggregation in Federated Learning
for Click-Through Rate Prediction
- URL: http://arxiv.org/abs/2209.00629v1
- Date: Tue, 30 Aug 2022 18:13:53 GMT
- Title: Online Meta-Learning for Model Update Aggregation in Federated Learning
for Click-Through Rate Prediction
- Authors: Xianghang Liu, Bartłomiej Twardowski, Tri Kurniawan Wijaya
- Abstract summary: We propose a simple online meta-learning method to learn a strategy of aggregating the model updates.
Our method significantly outperforms the state-of-the-art in both the speed of convergence and the quality of the final learning results.
- Score: 2.9649783577150837
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In Federated Learning (FL) of click-through rate (CTR) prediction, users'
data is not shared, for privacy protection. Learning is performed by training
locally on client devices and communicating only model changes to the server.
There are two main challenges: (i) client heterogeneity, which causes FL
algorithms that aggregate model updates by weighted averaging to converge slowly
and produce unsatisfactory learning results; and (ii) the difficulty of tuning
the server learning rate by trial and error, given the large computation time
and resources each experiment requires. To address these challenges, we propose
a simple online meta-learning method that learns a strategy for aggregating the
model updates: it adaptively weighs the importance of each client based on its
attributes and adjusts the step sizes of the update. We perform extensive
evaluations on public datasets. Our method significantly outperforms the
state-of-the-art in both the speed of convergence and the quality of the final
learning results.
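The abstract describes the aggregation strategy only at a high level. Below is a minimal sketch, assuming a PyTorch-style setup, of how a server could meta-learn per-client aggregation weights and a global step size online by back-propagating a small held-out validation loss through the aggregation step. The class name MetaAggregator, the choice of client attributes (e.g. update norm and log sample count), the two-layer weighting network, and the single learnable log step size are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of online meta-learned aggregation of
# client model updates in federated learning. Per-client attributes are mapped
# to importance weights, and a learnable log step size scales the aggregated
# update; both are trained online from a held-out validation loss.
import torch
import torch.nn as nn


class MetaAggregator(nn.Module):
    def __init__(self, num_attributes: int = 2):
        super().__init__()
        # Small network mapping client attributes -> unnormalized importance scores.
        self.weight_net = nn.Sequential(
            nn.Linear(num_attributes, 16), nn.ReLU(), nn.Linear(16, 1)
        )
        self.log_step_size = nn.Parameter(torch.zeros(1))  # exp(0) = 1.0 initially

    def forward(self, client_updates: torch.Tensor, client_attrs: torch.Tensor) -> torch.Tensor:
        # client_updates: (num_clients, model_dim); client_attrs: (num_clients, num_attributes)
        scores = self.weight_net(client_attrs).squeeze(-1)            # (num_clients,)
        weights = torch.softmax(scores, dim=0)                        # adaptive per-client importance
        aggregated = (weights.unsqueeze(-1) * client_updates).sum(0)  # weighted average of updates
        return torch.exp(self.log_step_size) * aggregated             # learned server step size


def server_round(global_params, client_updates, client_attrs,
                 aggregator, meta_opt, val_loss_fn):
    """One server round with an online meta-update of the aggregation strategy."""
    delta = aggregator(client_updates, client_attrs)
    candidate = global_params + delta        # tentative new global model (keeps the graph)
    meta_loss = val_loss_fn(candidate)       # differentiable loss on a small validation batch
    meta_opt.zero_grad()
    meta_loss.backward()                     # gradients flow into weight_net and log_step_size
    meta_opt.step()
    return (global_params + delta).detach()  # apply the aggregation computed this round
```

In practice the validation signal, the attribute set (client data size, update norm, participation history, and so on), and whether the step size is a scalar or per-layer are design choices; the sketch only illustrates the meta-gradient flow through a weighted aggregation.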
Related papers
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup
for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients (a minimal sketch of a client-side adaptive step size appears after this list).
arXiv Detail & Related papers (2023-09-18T12:35:05Z) - Elastically-Constrained Meta-Learner for Federated Learning [3.032797107899338]
Federated learning is an approach for collaboratively training machine learning models across multiple parties that prohibit data sharing.
One of the challenges in federated learning is the heterogeneous (non-IID) data across clients, since a single model cannot fit the data distribution of every client.
arXiv Detail & Related papers (2023-06-29T05:58:47Z) - Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two of these hurdles.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
arXiv Detail & Related papers (2022-06-05T01:14:46Z) - Communication-Efficient Federated Learning with Accelerated Client Gradient [46.81082897703729]
Federated learning often suffers from slow and unstable convergence due to the heterogeneous characteristics of participating client datasets.
We propose a simple but effective federated learning framework, which improves the consistency across clients and facilitates the convergence of the server model.
We provide the theoretical convergence rate of our algorithm and demonstrate remarkable performance gains in terms of accuracy and communication efficiency.
arXiv Detail & Related papers (2022-01-10T05:31:07Z) - Tackling Dynamics in Federated Incremental Learning with Variational
Embedding Rehearsal [27.64806509651952]
We propose a novel algorithm to address the incremental learning process in an FL scenario.
We first propose using deep Variational Embeddings that secure the privacy of the client data.
Second, we propose a server-side training method that enables a model to rehearse the previously learnt knowledge.
arXiv Detail & Related papers (2021-10-19T02:26:35Z) - FedKD: Communication Efficient Federated Learning via Knowledge
Distillation [56.886414139084216]
Federated learning is widely used to learn intelligent models from decentralized data.
In federated learning, clients need to communicate their local model updates in each iteration of model learning.
We propose a communication efficient federated learning method based on knowledge distillation.
arXiv Detail & Related papers (2021-08-30T15:39:54Z) - Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv Detail & Related papers (2021-04-27T18:23:54Z) - Straggler-Resilient Federated Learning: Leveraging the Interplay Between
Statistical Accuracy and System Heterogeneity [57.275753974812666]
Federated learning involves learning from data samples distributed across a network of clients while the data remains local.
In this paper, we propose a novel straggler-resilient federated learning method that incorporates statistical characteristics of the clients' data to adaptively select the clients in order to speed up the learning procedure.
arXiv Detail & Related papers (2020-12-28T19:21:14Z) - CatFedAvg: Optimising Communication-efficiency and Classification
Accuracy in Federated Learning [2.2172881631608456]
We introduce a new family of Federated Learning algorithms called CatFedAvg.
It not only improves communication efficiency but also improves the quality of learning, using a category coverage maximization strategy.
Our experiments show an increase of 10 absolute percentage points in accuracy on the MNIST dataset, with 70 absolute percentage points lower network transfer compared to FedAvg.
arXiv Detail & Related papers (2020-11-14T06:52:02Z) - Meta-learning the Learning Trends Shared Across Tasks [123.10294801296926]
Gradient-based meta-learning algorithms excel at quick adaptation to new tasks with limited data.
Existing meta-learning approaches depend only on the current task's information during adaptation.
We propose a 'Path-aware' model-agnostic meta-learning approach.
arXiv Detail & Related papers (2020-10-19T08:06:47Z)
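As referenced in the FedLALR entry above, several of the related papers adapt step sizes on the client side. The sketch below shows a client-local AMSGrad-style update in which each client keeps its own optimizer state and auto-tunes a base learning rate from a local statistic; the specific scheduling rule used here (damping by a running gradient-norm average) is an illustrative assumption, not the FedLALR schedule.

```python
# Minimal sketch (assumptions, not the FedLALR algorithm): a client-local
# AMSGrad-style optimizer whose effective learning rate is auto-tuned from a
# simple local statistic, so each heterogeneous client takes its own step sizes.
import numpy as np


class ClientAMSGrad:
    def __init__(self, dim, base_lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
        self.base_lr, self.beta1, self.beta2, self.eps = base_lr, beta1, beta2, eps
        self.m = np.zeros(dim)        # first-moment (momentum) estimate
        self.v = np.zeros(dim)        # second-moment estimate
        self.v_hat = np.zeros(dim)    # element-wise max of past v (the AMSGrad correction)
        self.grad_norm_ema = 1.0      # local statistic used to auto-tune the rate (assumption)

    def step(self, params, grad):
        self.m = self.beta1 * self.m + (1 - self.beta1) * grad
        self.v = self.beta2 * self.v + (1 - self.beta2) * grad ** 2
        self.v_hat = np.maximum(self.v_hat, self.v)
        # Client-specific schedule: damp the base rate when local gradients are large.
        self.grad_norm_ema = 0.9 * self.grad_norm_ema + 0.1 * float(np.linalg.norm(grad))
        lr = self.base_lr / (1.0 + self.grad_norm_ema)
        return params - lr * self.m / (np.sqrt(self.v_hat) + self.eps)
```

Each client would run several such local steps on its own data and send only the resulting model change back to the server, as in standard FedAvg-style training.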
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.