Distributed Learning on Heterogeneous Resource-Constrained Devices
- URL: http://arxiv.org/abs/2006.05403v1
- Date: Tue, 9 Jun 2020 16:58:49 GMT
- Title: Distributed Learning on Heterogeneous Resource-Constrained Devices
- Authors: Martin Rapp, Ramin Khalili, Jörg Henkel
- Abstract summary: We consider a distributed system, consisting of a heterogeneous set of devices, ranging from low-end to high-end.
We propose the first approach that enables distributed learning in such a heterogeneous system.
Applying our approach, each device employs a neural network (NN) with a topology that fits its capabilities; however, part of these NNs share the same topology, so that their parameters can be jointly learned.
- Score: 3.6187468775839373
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider a distributed system, consisting of a heterogeneous set of
devices, ranging from low-end to high-end. These devices have different
profiles, e.g., different energy budgets, or different hardware specifications,
determining their capabilities on performing certain learning tasks. We propose
the first approach that enables distributed learning in such a heterogeneous
system. Applying our approach, each device employs a neural network (NN) with a
topology that fits its capabilities; however, part of these NNs share the same
topology, so that their parameters can be jointly learned. This differs from
current approaches, such as federated learning, which require all devices to
employ the same NN, enforcing a trade-off between achievable accuracy and
computational overhead of training. We evaluate heterogeneous distributed
learning for reinforcement learning (RL) and observe that it greatly improves
the achievable reward on more powerful devices, compared to current approaches,
while still maintaining a high reward on the weaker devices. We also explore
supervised learning, observing similar gains.
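The idea of capability-matched topologies with a jointly learned shared part can be pictured with a short sketch. The code below is illustrative only (module names, layer sizes, and the FedAvg-style averaging are assumptions, not the authors' implementation): every device instantiates the same shared backbone, attaches a head sized to its own budget, and only the shared parameters are averaged across devices.

```python
# Minimal sketch (not the authors' code): devices train differently sized NNs
# that share a common sub-topology whose parameters are aggregated jointly.
import torch
import torch.nn as nn

def make_device_model(hidden_layers: int) -> nn.ModuleDict:
    """Shared backbone is identical on every device; the head grows with capability."""
    shared = nn.Sequential(nn.Linear(16, 32), nn.ReLU())   # same topology everywhere
    head_layers = []
    for _ in range(hidden_layers):                          # device-specific depth
        head_layers += [nn.Linear(32, 32), nn.ReLU()]
    head_layers += [nn.Linear(32, 4)]
    return nn.ModuleDict({"shared": shared, "head": nn.Sequential(*head_layers)})

def aggregate_shared(models):
    """Average only the parameters of the shared sub-topology (FedAvg-style)."""
    shared_states = [m["shared"].state_dict() for m in models]
    avg = {k: torch.stack([s[k] for s in shared_states]).mean(dim=0)
           for k in shared_states[0]}
    for m in models:
        m["shared"].load_state_dict(avg)

# a low-end device uses a shallow head, a high-end device a deeper one
devices = [make_device_model(hidden_layers=0), make_device_model(hidden_layers=3)]
# ... after local training steps on each device ...
aggregate_shared(devices)
```

Only the shared sub-module participates in aggregation, so a weak device never has to train or store the deeper head used by a powerful one.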
Related papers
- Federated Learning for Computationally-Constrained Heterogeneous Devices: A Survey [3.219812767529503]
Federated learning (FL) offers a privacy-preserving trade-off between communication overhead and model accuracy.
We outline the challenges FL has to overcome to be widely applicable in real-world applications.
arXiv Detail & Related papers (2023-07-18T12:05:36Z)
- Adaptive Parameterization of Deep Learning Models for Federated Learning [85.82002651944254]
Federated Learning offers a way to train deep neural networks in a distributed fashion.
It incurs a communication overhead as the model parameters or gradients need to be exchanged regularly during training.
In this paper, we propose to utilise parallel Adapters for Federated Learning (a minimal sketch follows this entry).
arXiv Detail & Related papers (2023-02-06T17:30:33Z)
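One way to read the parallel-adapter idea summarized in the entry above: each client freezes a large base layer and trains only a small adapter branch placed in parallel with it, so that just the adapter weights have to be exchanged per round. The sketch below encodes that reading; the class name, bottleneck size, and summation of branch outputs are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: a parallel adapter wraps a frozen base layer; only the small
# adapter is trained and communicated, shrinking per-round FL traffic.
import torch
import torch.nn as nn

class ParallelAdapter(nn.Module):
    def __init__(self, base: nn.Linear, bottleneck: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():            # base stays frozen on the client
            p.requires_grad = False
        self.adapter = nn.Sequential(               # small trainable branch in parallel
            nn.Linear(base.in_features, bottleneck),
            nn.ReLU(),
            nn.Linear(bottleneck, base.out_features),
        )

    def forward(self, x):
        return self.base(x) + self.adapter(x)       # branch outputs are summed

layer = ParallelAdapter(nn.Linear(128, 128))
trainable = [p for p in layer.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable), "parameters exchanged instead of",
      sum(p.numel() for p in layer.base.parameters()))
```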
- Stabilizing Q-learning with Linear Architectures for Provably Efficient Learning [53.17258888552998]
This work proposes an exploration variant of the basic $Q$-learning protocol with linear function approximation.
We show that the performance of the algorithm degrades very gracefully under a novel and more permissive notion of approximation error (an illustrative update rule follows this entry).
arXiv Detail & Related papers (2022-06-01T23:26:51Z)
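The linear-architecture setting from the Q-learning entry above boils down to approximating Q(s, a) with a linear model w · phi(s, a) and nudging w along the temporal-difference error. The snippet below shows that standard update only; the paper's specific exploration variant and its approximation-error analysis are not reproduced here, and the feature map is a toy assumption.

```python
# Illustrative linear Q-learning update, Q(s, a) ~= w . phi(s, a); the paper's
# exploration scheme and error analysis are not reproduced here.
import numpy as np

def q_learning_step(w, phi, s, a, r, s_next, actions, gamma=0.99, lr=0.1):
    """One TD(0) update of the weight vector w for features phi(s, a)."""
    q_sa = w @ phi(s, a)
    q_next = max(w @ phi(s_next, b) for b in actions)   # greedy bootstrap target
    td_error = r + gamma * q_next - q_sa
    return w + lr * td_error * phi(s, a)

# toy example: 2 states x 2 actions encoded as a one-hot feature vector
phi = lambda s, a: np.eye(4)[2 * s + a]
w = np.zeros(4)
w = q_learning_step(w, phi, s=0, a=1, r=1.0, s_next=1, actions=[0, 1])
print(w)
```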
- Parallel Successive Learning for Dynamic Distributed Model Training over Heterogeneous Wireless Networks [50.68446003616802]
Federated learning (FedL) has emerged as a popular technique for distributing model training over a set of wireless devices.
We develop parallel successive learning (PSL), which expands the FedL architecture along three dimensions.
Our analysis sheds light on the notion of cold vs. warmed up models, and model inertia in distributed machine learning.
arXiv Detail & Related papers (2022-02-07T05:11:01Z)
- DISTREAL: Distributed Resource-Aware Learning in Heterogeneous Systems [2.1506382989223782]
We study the problem of distributed training of neural networks (NNs) on devices with heterogeneous, limited, and time-varying availability of computational resources.
We present an adaptive, resource-aware, on-device learning mechanism, DISTREAL, which is able to fully and efficiently utilize the available resources (a minimal sketch follows this entry).
arXiv Detail & Related papers (2021-12-16T10:15:31Z)
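For the DISTREAL entry above, one simple way to picture resource-aware training is to let each round update only as many parameter tensors as the momentary resource availability allows. Treating "fraction of tensors updated per round" as the resource knob is an assumption made for illustration; it is not claimed to be DISTREAL's exact mechanism.

```python
# Hedged sketch: scale the per-round training cost to currently available
# resources by updating only a randomly chosen subset of parameter tensors.
# (The knob used here is an illustrative assumption, not DISTREAL's mechanism.)
import random
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.05)

def train_step(x, y, availability: float):
    """availability in (0, 1]: fraction of parameter tensors updated this round."""
    params = list(model.parameters())
    keep = max(1, int(availability * len(params)))
    active = set(random.sample(range(len(params)), keep))
    for i, p in enumerate(params):                 # freeze what we cannot afford
        p.requires_grad_(i in active)
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    opt.step()
    return loss.item()

x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
print(train_step(x, y, availability=0.5))          # cheap round on a constrained device
```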
- Distributed Learning for Time-varying Networks: A Scalable Design [13.657740129012804]
We propose a distributed learning framework based on a scalable deep neural network (DNN) design.
By exploiting the permutation equivalence and invariance properties of the learning tasks, DNNs of different scales can be built for different clients.
Model aggregation can also be conducted based on these two sub-matrices to improve learning convergence and performance (a minimal sketch follows this entry).
arXiv Detail & Related papers (2021-07-31T12:44:28Z)
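For the scalable-design entry above, sub-matrix-based aggregation can be illustrated by treating a small client's weight matrix as the leading block of a larger client's matrix and averaging only the overlapping block. That nesting is an assumption made for illustration; the paper's permutation-based construction is not reproduced here.

```python
# Illustrative sub-matrix aggregation across clients whose layer widths differ.
# Assumption: the small client's weight matrix is the leading sub-matrix of the
# large one; the paper's permutation-based design is not reproduced here.
import numpy as np

rng = np.random.default_rng(0)
w_small = rng.normal(size=(8, 16))     # low-end client: 8x16 layer
w_large = rng.normal(size=(32, 64))    # high-end client: 32x64 layer

def aggregate_submatrix(mats):
    """Average the overlapping leading block, then write it back into every client."""
    rows = min(m.shape[0] for m in mats)
    cols = min(m.shape[1] for m in mats)
    block = np.mean([m[:rows, :cols] for m in mats], axis=0)
    for m in mats:
        m[:rows, :cols] = block
    return mats

aggregate_submatrix([w_small, w_large])
assert np.allclose(w_small, w_large[:8, :16])   # shared block is now identical
```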
- Device Sampling for Heterogeneous Federated Learning: Theory, Algorithms, and Implementation [24.084053136210027]
We develop a sampling methodology based on graph convolutional networks (GCNs).
We find that our methodology, while sampling less than 5% of all devices, substantially outperforms conventional federated learning (FedL) in both trained model accuracy and required resource utilization (a toy sketch follows this entry).
arXiv Detail & Related papers (2021-01-04T05:59:50Z)
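For the GCN-based device-sampling entry above, the data flow can be pictured by scoring devices with a single graph-convolution pass over the device connectivity graph and keeping roughly the top 5%. The real methodology learns which devices to sample; everything below, including the random weights and attribute names, is a toy assumption.

```python
# Toy rendering of GCN-based device selection: propagate device attributes over
# the connectivity graph with one (untrained) graph-convolution pass and keep
# the top-scoring ~5% of devices. Weights are random, purely to show data flow.
import numpy as np

rng = np.random.default_rng(0)
n_devices, n_feats = 100, 4
features = rng.random((n_devices, n_feats))            # e.g. compute, battery, data size
adj = (rng.random((n_devices, n_devices)) < 0.05).astype(float)
adj = np.maximum(adj, adj.T)                           # undirected connectivity
np.fill_diagonal(adj, 1.0)                             # self-loops

deg_inv_sqrt = 1.0 / np.sqrt(adj.sum(axis=1))
norm_adj = adj * deg_inv_sqrt[:, None] * deg_inv_sqrt[None, :]   # D^-1/2 A D^-1/2

w = rng.normal(size=(n_feats, 1))                      # stand-in for learned GCN weights
scores = np.maximum(norm_adj @ features @ w, 0.0).ravel()        # one ReLU(GCN) layer

k = max(1, int(0.05 * n_devices))                      # sample < 5% of all devices
selected = np.argsort(scores)[-k:]
print("participating devices:", selected)
```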
- Fast-Convergent Federated Learning [82.32029953209542]
Federated learning is a promising solution for distributing machine learning tasks through modern networks of mobile devices.
We propose a fast-convergent federated learning algorithm, called FOLB, which performs intelligent sampling of devices in each round of model training (a hedged sketch follows this entry).
arXiv Detail & Related papers (2020-07-26T14:37:51Z)
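The FOLB entry above describes intelligent per-round device sampling. One plausible reading, used purely for illustration, is to favor devices whose latest local updates carry the most signal, e.g. by sampling in proportion to gradient norms; this is an assumption, not FOLB's actual selection or aggregation rule.

```python
# Hedged sketch of gradient-informed device sampling for one FL round.
# Sampling in proportion to local gradient norms is one plausible reading of
# "intelligent sampling"; it is not claimed to be FOLB's exact rule.
import numpy as np

rng = np.random.default_rng(0)
n_devices, dim, budget = 20, 10, 5
local_grads = rng.normal(size=(n_devices, dim))        # last reported local gradients

norms = np.linalg.norm(local_grads, axis=1)
probs = norms / norms.sum()                            # favor informative devices
chosen = rng.choice(n_devices, size=budget, replace=False, p=probs)

update = local_grads[chosen].mean(axis=0)              # combine the sampled updates
print("round participants:", chosen)
```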
- Self-organizing Democratized Learning: Towards Large-scale Distributed Learning Systems [71.14339738190202]
Democratized learning (Dem-AI) lays out a holistic philosophy with underlying principles for building large-scale distributed and democratized machine learning systems.
Inspired by the Dem-AI philosophy, a novel distributed learning approach is proposed in this paper.
The proposed algorithms achieve better generalization performance of the learning models at the agents than conventional FL algorithms.
arXiv Detail & Related papers (2020-07-07T08:34:48Z)
- From Federated to Fog Learning: Distributed Machine Learning over Heterogeneous Wireless Networks [71.23327876898816]
Federated learning has emerged as a technique for training ML models at the network edge by leveraging processing capabilities across the nodes that collect the data.
We advocate a new learning paradigm called fog learning which will intelligently distribute ML model training across the continuum of nodes from edge devices to cloud servers.
arXiv Detail & Related papers (2020-06-07T05:11:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.