GTFLAT: Game Theory Based Add-On For Empowering Federated Learning
Aggregation Techniques
- URL: http://arxiv.org/abs/2212.04103v1
- Date: Thu, 8 Dec 2022 06:39:51 GMT
- Title: GTFLAT: Game Theory Based Add-On For Empowering Federated Learning
Aggregation Techniques
- Authors: Hamidreza Mahini, Hamid Mousavi, Masoud Daneshtalab
- Abstract summary: GTFLAT, as a game theory-based add-on, addresses an important research question.
How can a federated learning algorithm achieve better performance and training efficiency by setting more effective adaptive weights for averaging in the model aggregation phase?
The results reveal that, on average, using GTFLAT increases top-1 test accuracy by 1.38% while requiring 21.06% fewer communication rounds to reach that accuracy.
- Score: 0.3867363075280543
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: GTFLAT, as a game theory-based add-on, addresses an important research
question: How can a federated learning algorithm achieve better performance and
training efficiency by setting more effective adaptive weights for averaging in
the model aggregation phase? The main objectives for the ideal method of
answering the question are: (1) empowering federated learning algorithms to
reach better performance in fewer communication rounds, notably in the face of
heterogeneous scenarios, and last but not least, (2) being easy to use
alongside the state-of-the-art federated learning algorithms as a new module.
To this end, GTFLAT models the averaging task as a strategic game among active
users. Then it proposes a systematic solution based on the population game and
evolutionary dynamics to find the equilibrium. In contrast with existing
approaches that impose the weights on the participants, GTFLAT arrives at a
self-enforcing agreement among the clients such that none of them is motivated
to deviate from it unilaterally. The results reveal that, on average, using
GTFLAT increases top-1 test accuracy by 1.38% while requiring 21.06% fewer
communication rounds to reach that accuracy.
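The abstract states only that GTFLAT casts aggregation-weight selection as a population game solved via evolutionary dynamics; it does not spell out the dynamics themselves. As an illustrative sketch only, the snippet below runs generic discrete-time replicator dynamics on a hypothetical payoff matrix to obtain simplex weights, then uses them for FedAvg-style weighted model averaging. The payoff construction and all function names are assumptions for illustration, not GTFLAT's actual method.

```python
import numpy as np

def replicator_weights(payoff, steps=1000, tol=1e-9):
    """Discrete-time replicator dynamics on a population game.

    payoff: (n, n) matrix; payoff[i, j] is the payoff to strategy i
    when matched against strategy j (illustrative assumption).
    Returns a weight vector on the probability simplex that is a
    rest point of the dynamics (an equilibrium candidate).
    """
    n = payoff.shape[0]
    x = np.full(n, 1.0 / n)            # start from the uniform mixture
    for _ in range(steps):
        f = payoff @ x                 # per-strategy fitness under mixture x
        f = f - f.min() + 1e-12        # shift so fitness stays positive
        new_x = x * f / (x @ f)        # replicator update; stays on the simplex
        if np.abs(new_x - x).sum() < tol:
            break
        x = new_x
    return x

def aggregate(client_models, weights):
    """FedAvg-style weighted averaging with game-derived weights."""
    return sum(w * m for w, m in zip(weights, client_models))
```

A symmetric, uniform payoff leaves the uniform mixture fixed, so the sketch reduces to plain FedAvg in that degenerate case; a payoff built from, say, pairwise client-model agreement would shift weight toward mutually consistent clients.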
Related papers
- pFedGame -- Decentralized Federated Learning using Game Theory in Dynamic Topology [1.1970409518725493]
pFedGame is proposed for decentralized federated learning and is best suited to temporally dynamic networks.
The proposed algorithm works without any centralized server for aggregation.
Experiments performed to assess the performance of pFedGame have shown promising results with accuracy higher than 70% for heterogeneous data.
arXiv Detail & Related papers (2024-10-05T06:39:16Z) - Towards Dynamic Resource Allocation and Client Scheduling in Hierarchical Federated Learning: A Two-Phase Deep Reinforcement Learning Approach [40.082601481580426]
Federated learning is a viable technique to train a shared machine learning model without sharing data.
This paper presents a new two-phase deep deterministic policy gradient (DDPG) framework to balance the learning delay and model accuracy of an FL process online.
arXiv Detail & Related papers (2024-06-21T07:01:23Z) - Ranking-based Client Selection with Imitation Learning for Efficient Federated Learning [20.412469498888292]
Federated Learning (FL) enables multiple devices to collaboratively train a shared model.
The selection of participating devices in each training round critically affects both the model performance and training efficiency.
We introduce a novel device selection solution called FedRank, which is an end-to-end, ranking-based approach.
arXiv Detail & Related papers (2024-05-07T08:44:29Z) - FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup
for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific, auto-tuned learning-rate scheduling converges and achieves linear speedup with respect to the number of clients.
arXiv Detail & Related papers (2023-09-18T12:35:05Z) - A Reinforcement Learning-assisted Genetic Programming Algorithm for Team
Formation Problem Considering Person-Job Matching [70.28786574064694]
A reinforcement learning-assisted genetic programming algorithm (RL-GP) is proposed to enhance the quality of solutions.
The hyper-heuristic rules obtained through efficient learning can be utilized as decision-making aids when forming project teams.
arXiv Detail & Related papers (2023-04-08T14:32:12Z) - Proof of Swarm Based Ensemble Learning for Federated Learning
Applications [3.2536767864585663]
In federated learning, centralised ensemble learning cannot be applied directly due to privacy concerns.
Most distributed consensus algorithms, such as Byzantine fault tolerance (BFT), do not normally perform well in such applications.
We propose PoSw, a novel distributed consensus algorithm for ensemble learning in a federated setting.
arXiv Detail & Related papers (2022-12-28T13:53:34Z) - Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on a momentum-based variance-reduction technique in cross-silo FL.
arXiv Detail & Related papers (2022-12-02T05:07:50Z) - Efficient Few-Shot Object Detection via Knowledge Inheritance [62.36414544915032]
Few-shot object detection (FSOD) aims at learning a generic detector that can adapt to unseen tasks with scarce training samples.
We present an efficient pretrain-transfer framework (PTF) baseline that adds no computational overhead.
We also propose an adaptive length re-scaling (ALR) strategy to alleviate the vector length inconsistency between the predicted novel weights and the pretrained base weights.
arXiv Detail & Related papers (2022-03-23T06:24:31Z) - Fairness and Accuracy in Federated Learning [17.218814060589956]
This paper proposes FedFa, an algorithm to achieve more fairness and accuracy in federated learning.
It introduces an optimization scheme that employs a double momentum gradient, thereby accelerating the convergence rate of the model.
An appropriate weight selection algorithm that combines the information quantity of training accuracy and training frequency to measure the weights is proposed.
arXiv Detail & Related papers (2020-12-18T06:28:37Z) - Improving Auto-Augment via Augmentation-Wise Weight Sharing [123.71986174280741]
A key component of automatic augmentation search is the evaluation process for a particular augmentation policy.
In this paper, we dive into the dynamics of augmented training of the model.
We design a powerful and efficient proxy task based on the Augmentation-Wise Weight Sharing (AWS) to form a fast yet accurate evaluation process.
arXiv Detail & Related papers (2020-09-30T15:23:12Z) - Adaptive Serverless Learning [114.36410688552579]
We propose a novel adaptive decentralized training approach, which can compute the learning rate from data dynamically.
Our theoretical results reveal that the proposed algorithm can achieve linear speedup with respect to the number of workers.
To reduce communication overhead, we further propose a communication-efficient adaptive decentralized training approach.
arXiv Detail & Related papers (2020-08-24T13:23:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.