Toward Multiple Federated Learning Services Resource Sharing in Mobile
Edge Networks
- URL: http://arxiv.org/abs/2011.12469v1
- Date: Wed, 25 Nov 2020 01:29:41 GMT
- Title: Toward Multiple Federated Learning Services Resource Sharing in Mobile
Edge Networks
- Authors: Minh N. H. Nguyen, Nguyen H. Tran, Yan Kyaw Tun, Zhu Han, Choong Seon
Hong
- Abstract summary: We study a new model of multiple federated learning services at the multi-access edge computing server.
We propose a joint resource optimization and hyper-learning rate control problem, namely MS-FEDL.
Our simulation results demonstrate the convergence performance of our proposed algorithms.
- Score: 88.15736037284408
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Federated Learning is a new learning scheme for collaboratively
training a shared prediction model while keeping data locally on participating devices. In
this paper, we study a new model of multiple federated learning services at the
multi-access edge computing server. Accordingly, the sharing of CPU resources
among learning services at each mobile device for the local training process
and allocating communication resources among mobile devices for exchanging
learning information must be considered. Furthermore, the convergence
performance of different learning services depends on the hyper-learning rate
parameter that needs to be precisely decided. Towards this end, we propose a
joint resource optimization and hyper-learning rate control problem, namely
MS-FEDL, regarding the energy consumption of mobile devices and overall
learning time. We design a centralized algorithm based on the block coordinate
descent method and a decentralized JP-miADMM algorithm for solving the MS-FEDL
problem. Unlike the centralized approach, the decentralized approach
requires more iterations to converge, but it allows each learning service to
independently manage its local resources and learning process without revealing
its service information. Our simulation results demonstrate the
convergence of the proposed algorithms and their superior performance
compared to a heuristic strategy.
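The centralized solver alternates over blocks of variables (CPU shares, bandwidth, hyper-learning rates), holding all but one block fixed per step. A minimal sketch of block coordinate descent for a joint resource-allocation problem is below; the objective, constants, and feasible ranges are illustrative stand-ins, not the paper's MS-FEDL formulation.

```python
# Illustrative block coordinate descent (BCD) for joint resource allocation.
# The cost model below is a toy surrogate, NOT the MS-FEDL objective.

def ternary_search(f, lo, hi, iters=100):
    """Minimize a unimodal 1-D function f on [lo, hi]."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

def bcd(objective, blocks, x0, rounds=50):
    """Alternately minimize `objective` over each coordinate block,
    holding the other coordinates fixed (one BCD sweep per round)."""
    x = list(x0)
    for _ in range(rounds):
        for i, (lo, hi) in blocks.items():
            x[i] = ternary_search(
                lambda v: objective(x[:i] + [v] + x[i + 1:]), lo, hi)
    return x

# Toy surrogate: CPU energy grows with frequency, local training time
# shrinks with it, and upload time shrinks with allocated bandwidth.
def cost(x):
    freq, bw = x
    energy = 0.5 * freq ** 2      # dynamic CPU energy ~ f^2
    comp_time = 4.0 / freq        # local training time ~ 1/f
    comm_time = 2.0 / bw          # communication time ~ 1/bandwidth
    return energy + comp_time + comm_time

sol = bcd(cost, {0: (0.1, 5.0), 1: (0.1, 1.0)}, [1.0, 0.5])
```

Because each coordinate subproblem here is unimodal, every block update is exact and the sweep converges quickly; in the paper's setting the same idea applies per variable block of the MS-FEDL problem.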
Related papers
- Decentralized and Lifelong-Adaptive Multi-Agent Collaborative Learning [57.652899266553035]
Decentralized and lifelong-adaptive multi-agent collaborative learning aims to enhance collaboration among multiple agents without a central server.
We propose DeLAMA, a decentralized multi-agent lifelong collaborative learning algorithm with dynamic collaboration graphs.
arXiv Detail & Related papers (2024-03-11T09:21:11Z)
- Event-Triggered Decentralized Federated Learning over Resource-Constrained Edge Devices [12.513477328344255]
Federated learning (FL) is a technique for distributed machine learning (ML).
In traditional FL algorithms, trained models at the edge are periodically sent to a central server for aggregation.
We develop a novel methodology for fully decentralized FL, where devices conduct model aggregation via cooperative consensus formation.
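Server-free aggregation of this kind can be illustrated with a consensus-averaging (gossip) step, where each device repeatedly mixes its model with its neighbors' instead of uploading to a central server. The ring topology, mixing weight, and scalar "models" below are illustrative, not the paper's method.

```python
# Illustrative decentralized model aggregation via consensus averaging.
# Topology and mixing weight are toy choices for demonstration.

def consensus_round(params, neighbors, alpha=0.5):
    """One gossip step: each node moves toward the mean of its neighbors."""
    new = []
    for i, w in enumerate(params):
        nbr_mean = sum(params[j] for j in neighbors[i]) / len(neighbors[i])
        new.append((1 - alpha) * w + alpha * nbr_mean)
    return new

# Four devices on a ring, each holding a different local model
# (scalar weights for simplicity).
params = [0.0, 1.0, 2.0, 3.0]
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
for _ in range(200):
    params = consensus_round(params, ring)
# All devices converge to the global average of the initial models (1.5).
```

With a symmetric, doubly-stochastic mixing rule like this one, every device converges to the network-wide average model, which is what makes cooperative consensus a drop-in replacement for server-side aggregation.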
arXiv Detail & Related papers (2022-11-23T00:04:05Z)
- PMFL: Partial Meta-Federated Learning for heterogeneous tasks and its applications on real-world medical records [11.252157002705484]
Federated machine learning is a versatile and flexible tool to utilize distributed data from different sources.
We propose a new algorithm, which is an integration of federated learning and meta-learning, to tackle this issue.
We show that our algorithm could obtain the fastest training speed and achieve the best performance when dealing with heterogeneous medical datasets.
arXiv Detail & Related papers (2021-12-10T03:55:03Z)
- Federated Learning over Wireless IoT Networks with Optimized Communication and Resources [98.18365881575805]
Federated learning (FL) as a paradigm of collaborative learning techniques has obtained increasing research attention.
It is of interest to investigate fast-responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z)
- Energy-Efficient Multi-Orchestrator Mobile Edge Learning [54.28419430315478]
Mobile Edge Learning (MEL) is a collaborative learning paradigm that features distributed training of Machine Learning (ML) models over edge devices.
In MEL, possible coexistence of multiple learning tasks with different datasets may arise.
We propose lightweight algorithms that can achieve near-optimal performance and facilitate the trade-offs between energy consumption, accuracy, and solution complexity.
arXiv Detail & Related papers (2021-09-02T07:37:10Z)
- Federated Multi-Agent Actor-Critic Learning for Age Sensitive Mobile Edge Computing [16.49587367235662]
Mobile edge computing (MEC) introduces a new processing scheme for various distributed communication-computing systems.
We formulate a class of age-sensitive MEC models and define the average age of information (AoI) minimization problems of interest.
A novel policy based multi-agent deep reinforcement learning (RL) framework, called heterogeneous multi-agent actor critic (H-MAAC), is proposed as a paradigm for joint collaboration.
arXiv Detail & Related papers (2020-12-28T08:19:26Z)
- Dif-MAML: Decentralized Multi-Agent Meta-Learning [54.39661018886268]
We propose a cooperative multi-agent meta-learning algorithm, referred to as Diffusion Multi-Agent MAML (Dif-MAML).
We show that the proposed strategy allows a collection of agents to attain agreement at a linear rate and to converge to a stationary point of the aggregate MAML objective.
Simulation results illustrate the theoretical findings and the superior performance relative to the traditional non-cooperative setting.
arXiv Detail & Related papers (2020-10-06T16:51:09Z)
- F2A2: Flexible Fully-decentralized Approximate Actor-critic for Cooperative Multi-agent Reinforcement Learning [110.35516334788687]
Decentralized multi-agent reinforcement learning algorithms are sometimes impractical in complicated applications.
We propose a flexible fully decentralized actor-critic MARL framework, which can handle large-scale general cooperative multi-agent setting.
Our framework can achieve scalability and stability for large-scale environments and reduce information transmission.
arXiv Detail & Related papers (2020-04-17T14:56:29Z)
- Combining Federated and Active Learning for Communication-efficient Distributed Failure Prediction in Aeronautics [0.0]
We present a new distributed learning algorithm that relies on the learning paradigms of Active Learning and Federated Learning.
We evaluate this method on a public benchmark and show that its precision is very close to the state-of-the-art performance of non-distributed learning.
arXiv Detail & Related papers (2020-01-21T13:17:00Z) - Federated Learning with Cooperating Devices: A Consensus Approach for
Massive IoT Networks [8.456633924613456]
Federated learning (FL) is emerging as a new paradigm to train machine learning models in distributed systems.
The paper proposes a fully distributed (or server-less) learning approach: the proposed FL algorithms leverage the cooperation of devices that perform data operations inside the network.
The approach lays the groundwork for integration of FL within 5G and beyond networks characterized by decentralized connectivity and computing.
arXiv Detail & Related papers (2019-12-27T15:16:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.