Energy-Efficient Multi-Orchestrator Mobile Edge Learning
- URL: http://arxiv.org/abs/2109.00757v1
- Date: Thu, 2 Sep 2021 07:37:10 GMT
- Title: Energy-Efficient Multi-Orchestrator Mobile Edge Learning
- Authors: Mhd Saria Allahham, Sameh Sorour, Amr Mohamed, Aiman Erbad, Mohsen
Guizani
- Abstract summary: Mobile Edge Learning (MEL) is a collaborative learning paradigm that features distributed training of Machine Learning (ML) models over edge devices.
In MEL, multiple learning tasks with different datasets may coexist.
We propose lightweight algorithms that can achieve near-optimal performance and facilitate the trade-offs between energy consumption, accuracy, and solution complexity.
- Score: 54.28419430315478
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Mobile Edge Learning (MEL) is a collaborative learning paradigm that features
distributed training of Machine Learning (ML) models over edge devices (e.g.,
IoT devices). In MEL, multiple learning tasks with different datasets may coexist. The heterogeneity in edge devices' capabilities requires the joint optimization of the learners-orchestrator association and task allocation. To this end, we aim to develop an energy-efficient framework for learners-orchestrator association and learning task allocation, in which each orchestrator is associated with a group of learners sharing the same learning task, based on their communication channel qualities and computational resources, and allocates the tasks accordingly. Therein, a multi-objective optimization problem is formulated to minimize the total energy consumption and maximize the learning tasks' accuracy. However, solving such an optimization problem requires centralization and the availability of complete environment information at a single entity, which becomes impractical in large-scale systems. To reduce the solution complexity and to enable decentralization, we propose lightweight heuristic algorithms that achieve near-optimal performance and facilitate trade-offs between energy consumption, accuracy, and solution complexity. Simulation results show that the proposed approaches significantly reduce energy consumption when executing multiple learning tasks, compared to recent state-of-the-art methods.
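To make the two-stage idea concrete, below is a minimal sketch of a greedy learners-orchestrator association followed by compute-proportional task allocation. The data structures, cost model, and function names are illustrative assumptions for exposition only, not the paper's actual algorithms.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Learner:
    lid: int
    task: str
    compute: float               # local processing rate (samples/s); assumed metric
    tx_energy: Dict[int, float]  # estimated upload energy (J) per orchestrator,
                                 # derived from channel quality; assumed given

@dataclass
class Orchestrator:
    oid: int
    task: str

def greedy_association(learners: List[Learner],
                       orchestrators: List[Orchestrator]) -> Dict[int, int]:
    """Attach each learner to the same-task orchestrator that is cheapest
    to reach in terms of estimated communication energy."""
    return {
        lrn.lid: min((o for o in orchestrators if o.task == lrn.task),
                     key=lambda o: lrn.tx_energy[o.oid]).oid
        for lrn in learners
    }

def allocate_tasks(assignment: Dict[int, int],
                   learners: List[Learner],
                   batch_per_task: int = 1024) -> Dict[int, int]:
    """Split each orchestrator's training batch among its learners in
    proportion to compute capacity, so slower devices train on fewer
    samples (a simple proportional heuristic, assumed for illustration)."""
    groups: Dict[int, List[Learner]] = {}
    for lrn in learners:
        groups.setdefault(assignment[lrn.lid], []).append(lrn)
    batches: Dict[int, int] = {}
    for members in groups.values():
        total = sum(m.compute for m in members)
        for m in members:
            batches[m.lid] = round(batch_per_task * m.compute / total)
    return batches

# Toy example: two learners on the same task, two orchestrators.
learners = [Learner(1, "cifar", 2.0, {10: 0.4, 11: 0.9}),
            Learner(2, "cifar", 1.0, {10: 0.7, 11: 0.2})]
orchestrators = [Orchestrator(10, "cifar"), Orchestrator(11, "cifar")]
assoc = greedy_association(learners, orchestrators)  # {1: 10, 2: 11}
loads = allocate_tasks(assoc, learners)              # {1: 1024, 2: 1024}
```

The greedy rule above scores candidates by the energy term alone for brevity; a weighted-sum scalarization, e.g. minimizing lambda * E - (1 - lambda) * A for a weight lambda in [0, 1], is the usual way to fold an energy objective E and an accuracy objective A into a single score.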
Related papers
- Context-Aware Orchestration of Energy-Efficient Gossip Learning Schemes [8.382766344930157]
We present a distributed training approach based on the combination of Gossip Learning with adaptive optimization of the learning process.
We propose a data-driven approach to OGL management that relies on optimizing the learning process in real time for each node.
Results suggest that our approach is highly efficient and effective in a broad spectrum of network scenarios.
arXiv Detail & Related papers (2024-04-18T09:17:46Z)
- Multi-Objective Optimization Using Adaptive Distributed Reinforcement Learning [8.471466670802815]
We propose a multi-objective, multi-agent reinforcement learning (MARL) algorithm with high learning efficiency and low computational requirements.
We test our algorithm in an ITS environment with edge cloud computing.
Our algorithm also addresses various practical concerns with its modularized and asynchronous online training method.
arXiv Detail & Related papers (2024-03-13T18:05:16Z)
- Decentralized and Lifelong-Adaptive Multi-Agent Collaborative Learning [57.652899266553035]
Decentralized and lifelong-adaptive multi-agent collaborative learning aims to enhance collaboration among multiple agents without a central server.
We propose DeLAMA, a decentralized multi-agent lifelong collaborative learning algorithm with dynamic collaboration graphs.
arXiv Detail & Related papers (2024-03-11T09:21:11Z)
- ESFL: Efficient Split Federated Learning over Resource-Constrained Heterogeneous Wireless Devices [22.664980594996155]
Federated learning (FL) allows multiple parties (distributed devices) to train a machine learning model without sharing raw data.
We propose an efficient split federated learning algorithm (ESFL) to take full advantage of the powerful computing capabilities at a central server.
arXiv Detail & Related papers (2024-02-24T20:50:29Z)
- A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs).
MEMTL outperforms benchmark methods in both the inference accuracy and mean square error without requiring additional training data.
arXiv Detail & Related papers (2023-09-02T11:01:16Z)
- Multi-Resource Allocation for On-Device Distributed Federated Learning Systems [79.02994855744848]
This work poses a distributed multi-resource allocation scheme for minimizing the weighted sum of latency and energy consumption in the on-device distributed federated learning (FL) system.
Each mobile device in the system engages the model training process within the specified area and allocates its computation and communication resources for deriving and uploading parameters, respectively.
arXiv Detail & Related papers (2022-11-01T14:16:05Z)
- Cost-Effective Federated Learning in Mobile Edge Networks [37.16466118235272]
Federated learning (FL) is a distributed learning paradigm that enables a large number of mobile devices to collaboratively learn a model without sharing their raw data.
We analyze how to design an adaptive FL scheme in mobile edge networks that optimally chooses essential control variables to minimize the total cost.
We develop a low-cost sampling-based algorithm to learn the convergence related unknown parameters.
arXiv Detail & Related papers (2021-09-12T03:02:24Z)
- Energy-Efficient and Federated Meta-Learning via Projected Stochastic Gradient Ascent [79.58680275615752]
We propose an energy-efficient federated meta-learning framework.
We assume each task is owned by a separate agent, so a limited number of tasks is used to train a meta-model.
arXiv Detail & Related papers (2021-05-31T08:15:44Z)
- Toward Multiple Federated Learning Services Resource Sharing in Mobile Edge Networks [88.15736037284408]
We study a new model of multiple federated learning services at the multi-access edge computing server.
We propose a joint resource optimization and hyper-learning rate control problem, namely MS-FEDL.
Our simulation results demonstrate the convergence performance of our proposed algorithms.
arXiv Detail & Related papers (2020-11-25T01:29:41Z)