Optimal Task Assignment to Heterogeneous Federated Learning Devices
- URL: http://arxiv.org/abs/2010.00239v1
- Date: Thu, 1 Oct 2020 07:58:48 GMT
- Title: Optimal Task Assignment to Heterogeneous Federated Learning Devices
- Authors: Laércio Lima Pilla (ParSys - LRI)
- Abstract summary: We investigate the problem of minimizing the duration of Federated Learning rounds by controlling how much data each device uses for training.
We propose a polynomial-time algorithm named OLAR and prove that it provides optimal schedules.
Our results indicate that OLAR provides optimal solutions with a small execution time.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning provides new opportunities for training machine learning
models while respecting data privacy. This technique is based on heterogeneous
devices that work together to iteratively train a model while never sharing
their own data. Given the synchronous nature of this training, the performance
of Federated Learning systems is dictated by the slowest devices, also known as
stragglers. In this paper, we investigate the problem of minimizing the
duration of Federated Learning rounds by controlling how much data each device
uses for training. We formulate this problem as a makespan minimization problem
with identical, independent, and atomic tasks that have to be assigned to
heterogeneous resources with non-decreasing cost functions while respecting
lower and upper limits of tasks per resource. Based on this formulation, we
propose a polynomial-time algorithm named OLAR and prove that it provides
optimal schedules. We evaluate OLAR in an extensive experimental evaluation
using simulation that includes comparisons to other algorithms from the state
of the art and new extensions to them. Our results indicate that OLAR provides
optimal solutions with a small execution time. They also show that the presence
of lower and upper limits of tasks per resource erases any benefits that
suboptimal heuristics could provide in terms of algorithm execution time.
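The abstract's formulation lends itself to a greedy scheme: after granting each resource its lower limit, repeatedly give the next task to whichever resource would be cheapest after receiving it. The sketch below illustrates that idea under the paper's assumptions (identical atomic tasks, non-decreasing per-resource cost functions, lower/upper limits); the function name and signature are illustrative, not the paper's actual implementation of OLAR.

```python
import heapq

def greedy_schedule(n_tasks, costs, lower, upper):
    """Sketch: assign identical atomic tasks to heterogeneous resources,
    minimizing the makespan (maximum per-resource cost).
    costs[i](k) is the cost of resource i training k tasks; it must be
    non-decreasing in k. lower[i]/upper[i] bound tasks per resource."""
    n = len(costs)
    assign = list(lower)                      # start from the lower limits
    remaining = n_tasks - sum(assign)
    assert remaining >= 0, "lower limits exceed the number of tasks"
    # Min-heap keyed by the cost a resource would reach with one more task.
    heap = [(costs[i](assign[i] + 1), i)
            for i in range(n) if assign[i] < upper[i]]
    heapq.heapify(heap)
    for _ in range(remaining):
        _, i = heapq.heappop(heap)            # cheapest marginal placement
        assign[i] += 1
        if assign[i] < upper[i]:              # resource can still take more
            heapq.heappush(heap, (costs[i](assign[i] + 1), i))
    makespan = max(costs[i](assign[i]) for i in range(n))
    return assign, makespan

# Example: a device twice as fast as another, 6 tasks to place.
assign, makespan = greedy_schedule(
    6, [lambda k: k, lambda k: 2 * k], [0, 0], [10, 10])
print(assign, makespan)  # the faster device receives more tasks
```

With a binary heap this runs in O(T log n) for T tasks and n resources, matching the polynomial-time claim in the abstract; the monotone cost assumption is what makes the greedy choice safe.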
Related papers
- Split Federated Learning Over Heterogeneous Edge Devices: Algorithm and Optimization [7.013344179232109]
Split Learning (SL) is a promising collaborative machine learning approach, enabling resource-constrained devices to train models without sharing raw data.
Current SL algorithms face limitations in training efficiency and suffer from prolonged latency.
We propose the Heterogeneous Split Federated Learning framework, which allows resource-constrained clients to train their personalized client-side models in parallel.
arXiv Detail & Related papers (2024-11-21T07:46:01Z)
- A Human-Centered Approach for Improving Supervised Learning [0.44378250612683995]
This paper shows how we can strike a balance between performance, time, and resource constraints.
Another goal of this research is to make Ensembles more explainable and intelligible using the Human-Centered approach.
arXiv Detail & Related papers (2024-10-14T10:27:14Z)
- Self-Contrastive Forward-Forward Algorithm [3.1361717406527667]
The Forward-Forward (FF) algorithm relies on feedforward operations to optimize layer-wise objectives.
FF has failed to reach state-of-the-art performance on most standard benchmark tasks.
We propose the Self-Contrastive Forward-Forward (SCFF) algorithm, a competitive training method aimed at closing this performance gap.
arXiv Detail & Related papers (2024-09-17T22:58:20Z)
- Semi-Supervised One-Shot Imitation Learning [83.94646047695412]
One-shot Imitation Learning aims to imbue AI agents with the ability to learn a new task from a single demonstration.
We introduce the semi-supervised OSIL problem setting, where the learning agent is presented with a large dataset of trajectories.
We develop an algorithm specifically applicable to this semi-supervised OSIL setting.
arXiv Detail & Related papers (2024-08-09T18:11:26Z)
- Switchable Decision: Dynamic Neural Generation Networks [98.61113699324429]
We propose a switchable decision to accelerate inference by dynamically assigning resources for each data instance.
Our method benefits from less cost during inference while keeping the same accuracy.
arXiv Detail & Related papers (2024-05-07T17:44:54Z)
- Analysis and Optimization of Wireless Federated Learning with Data Heterogeneity [72.85248553787538]
This paper focuses on performance analysis and optimization for wireless FL, considering data heterogeneity, combined with wireless resource allocation.
We formulate the loss function minimization problem, under constraints on long-term energy consumption and latency, and jointly optimize client scheduling, resource allocation, and the number of local training epochs (CRE).
Experiments on real-world datasets demonstrate that the proposed algorithm outperforms other benchmarks in terms of the learning accuracy and energy consumption.
arXiv Detail & Related papers (2023-08-04T04:18:01Z)
- Communication-Efficient Device Scheduling for Federated Learning Using Stochastic Optimization [26.559267845906746]
Federated learning (FL) is a useful tool in distributed machine learning that utilizes users' local datasets in a privacy-preserving manner.
In this paper, we provide a novel convergence analysis and derive a convergence bound for the algorithm.
We also develop a new selection and power allocation algorithm that minimizes a function of the convergence bound and the average communication under a power constraint.
arXiv Detail & Related papers (2022-01-19T23:25:24Z)
- DEALIO: Data-Efficient Adversarial Learning for Imitation from Observation [57.358212277226315]
In imitation learning from observation (IfO), a learning agent seeks to imitate a demonstrating agent using only observations of the demonstrated behavior, without access to the control signals generated by the demonstrator.
Recent methods based on adversarial imitation learning have led to state-of-the-art performance on IfO problems, but they typically suffer from high sample complexity due to a reliance on data-inefficient, model-free reinforcement learning algorithms.
This issue makes them impractical to deploy in real-world settings, where gathering samples can incur high costs in terms of time, energy, and risk.
We propose a more data-efficient IfO algorithm
arXiv Detail & Related papers (2021-03-31T23:46:32Z)
- Learning Centric Wireless Resource Allocation for Edge Computing: Algorithm and Experiment [15.577056429740951]
Edge intelligence is an emerging network architecture that integrates sensing, communication, and computing components, and supports various machine learning applications.
Existing methods ignore two important facts: 1) different models have heterogeneous demands on training data; 2) there is a mismatch between the simulated environment and the real-world environment.
This paper proposes the learning centric wireless resource allocation scheme that maximizes the worst learning performance of multiple tasks.
arXiv Detail & Related papers (2020-10-29T06:20:40Z)
- Fast-Convergent Federated Learning [82.32029953209542]
Federated learning is a promising solution for distributing machine learning tasks through modern networks of mobile devices.
We propose a fast-convergent federated learning algorithm, called FOLB, which performs intelligent sampling of devices in each round of model training.
arXiv Detail & Related papers (2020-07-26T14:37:51Z)
- Improving a State-of-the-Art Heuristic for the Minimum Latency Problem with Data Mining [69.00394670035747]
Hybrid metaheuristics have become a trend in operations research.
A successful example combines the Greedy Randomized Adaptive Search Procedures (GRASP) and data mining techniques.
arXiv Detail & Related papers (2019-08-28T13:12:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.