Joint Coding and Scheduling Optimization for Distributed Learning over
Wireless Edge Networks
- URL: http://arxiv.org/abs/2103.04303v2
- Date: Tue, 9 Mar 2021 04:20:00 GMT
- Title: Joint Coding and Scheduling Optimization for Distributed Learning over
Wireless Edge Networks
- Authors: Nguyen Van Huynh, Dinh Thai Hoang, Diep N. Nguyen, and Eryk Dutkiewicz
- Abstract summary: This article addresses these problems by leveraging recent advances in coded computing and the deep dueling neural network architecture.
By introducing coded structures/redundancy, a distributed learning task can be completed without waiting for straggling nodes.
Simulations show that the proposed framework reduces the average learning delay in wireless edge computing by up to 66% compared with other DL approaches.
- Score: 21.422040036286536
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unlike theoretical distributed learning (DL), DL over wireless edge networks
faces the inherent dynamics/uncertainty of wireless connections and edge nodes,
making DL less efficient or even inapplicable in highly dynamic wireless
edge networks (e.g., those using mmWave interfaces). This article addresses these
problems by leveraging recent advances in coded computing and the deep dueling
neural network architecture. By introducing coded structures/redundancy, a
distributed learning task can be completed without waiting for straggling
nodes. Unlike conventional coded computing that only optimizes the code
structure, coded distributed learning over the wireless edge also requires
optimizing the selection/scheduling of wireless edge nodes with heterogeneous
connections, computing capability, and straggling effects. However, even
neglecting the aforementioned dynamics/uncertainty, the resulting joint
optimization of coding and scheduling to minimize the distributed learning time
turns out to be NP-hard. To tackle this and to account for the dynamics and
uncertainty of wireless connections and edge nodes, we reformulate the problem
as a Markov Decision Process and then design a novel deep reinforcement
learning algorithm that employs the deep dueling neural network architecture to
find the jointly optimal coding scheme and the best set of edge nodes for
different learning tasks without explicit information about the wireless
environment and edge nodes' straggling parameters. Simulations show that the
proposed framework reduces the average learning delay in wireless edge
computing by up to 66% compared with other DL approaches. The jointly optimal
framework in this article is also applicable to any distributed learning scheme
with heterogeneous and uncertain computing nodes.
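The straggler-tolerance idea behind coded computing can be illustrated with a toy example. This is not the paper's scheme, just a minimal sketch of a (3, 2) MDS-style code for a matrix-vector product: the matrix is split into two row-blocks, a third worker computes the sum of the blocks, and the master recovers the full result from any two of the three workers, so one straggler never delays the task. All function names here are hypothetical.

```python
# Straggler-tolerant coded computation with a (3, 2) MDS-style code.
# Task: compute A @ x where A is split into row-blocks A1 and A2.
# Worker 0 computes A1 @ x, worker 1 computes A2 @ x,
# worker 2 computes (A1 + A2) @ x; any 2 of 3 results suffice.

def mat_vec(rows, x):
    """Plain matrix-vector product on a list-of-rows matrix."""
    return [sum(a * v for a, v in zip(row, x)) for row in rows]

def add_blocks(b1, b2):
    """Elementwise sum of two equally shaped row-blocks."""
    return [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(b1, b2)]

def sub_vec(u, v):
    return [a - b for a, b in zip(u, v)]

def coded_matvec(a1, a2, x, finished):
    """Recover the full product [A1 @ x, A2 @ x] from the results of
    any 2 of the 3 workers; `finished` is the set of worker ids that
    responded in time (the missing one is treated as a straggler)."""
    results = {}
    if 0 in finished:
        results[0] = mat_vec(a1, x)                  # y1 = A1 @ x
    if 1 in finished:
        results[1] = mat_vec(a2, x)                  # y2 = A2 @ x
    if 2 in finished:
        results[2] = mat_vec(add_blocks(a1, a2), x)  # y3 = y1 + y2
    if len(results) < 2:
        raise RuntimeError("need results from at least 2 of 3 workers")
    if 0 in results and 1 in results:
        return results[0] + results[1]
    if 0 in results:                                 # recover y2 = y3 - y1
        return results[0] + sub_vec(results[2], results[0])
    return sub_vec(results[2], results[1]) + results[1]  # recover y1
```

The scheduling side of the paper's joint problem then amounts to choosing which heterogeneous nodes receive the coded pieces and how much redundancy (n relative to k) to provision, which is what makes the joint optimization NP-hard.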
Related papers
- Learning the Optimal Path and DNN Partition for Collaborative Edge Inference [4.368333109035076]
Deep Neural Networks (DNNs) have catalyzed the development of numerous intelligent mobile applications and services.
To address this, collaborative edge inference has been proposed.
This method involves partitioning a DNN inference task into several subtasks and distributing these across multiple network nodes.
We introduce a new bandit algorithm, B-EXPUCB, which combines elements of the classical blocked EXP3 and LinUCB algorithms, and demonstrate its sublinear regret.
arXiv Detail & Related papers (2024-10-02T01:12:16Z) - Fixing the NTK: From Neural Network Linearizations to Exact Convex
Programs [63.768739279562105]
We show that for a particular choice of mask weights that do not depend on the learning targets, this kernel is equivalent to the NTK of the gated ReLU network on the training data.
A consequence of this lack of dependence on the targets is that the NTK cannot perform better than the optimal MKL kernel on the training set.
arXiv Detail & Related papers (2023-09-26T17:42:52Z) - Optimization Guarantees of Unfolded ISTA and ADMM Networks With Smooth
Soft-Thresholding [57.71603937699949]
We study optimization guarantees, i.e., achieving near-zero training loss as the number of learning epochs increases.
We show that the threshold on the number of training samples increases with the network width.
arXiv Detail & Related papers (2023-09-12T13:03:47Z) - Lyapunov-Driven Deep Reinforcement Learning for Edge Inference Empowered
by Reconfigurable Intelligent Surfaces [30.1512069754603]
We propose a novel algorithm for energy-efficient, low-latency, accurate inference at the wireless edge.
We consider a scenario where new data are continuously generated/collected by a set of devices and are handled through a dynamic queueing system.
arXiv Detail & Related papers (2023-05-18T12:46:42Z) - Learning Cooperative Beamforming with Edge-Update Empowered Graph Neural
Networks [29.23937571816269]
We propose an edge-graph-neural-network (Edge-GNN) to learn the cooperative beamforming on the graph edges.
The proposed Edge-GNN achieves higher sum rate with much shorter computation time than state-of-the-art approaches.
arXiv Detail & Related papers (2022-11-23T02:05:06Z) - An Adaptive Device-Edge Co-Inference Framework Based on Soft
Actor-Critic [72.35307086274912]
High-dimensional parameter models and large-scale mathematical calculations restrict execution efficiency, especially for Internet of Things (IoT) devices.
We propose a new Deep Reinforcement Learning (DRL) method, Soft Actor-Critic for discrete actions (SAC-d), which generates the exit point and compressing bits by soft policy iterations.
Based on the latency- and accuracy-aware reward design, such a computation can adapt well to complex environments such as dynamic wireless channels and arbitrary processing, and is capable of supporting 5G URLLC.
arXiv Detail & Related papers (2022-01-09T09:31:50Z) - Learning Autonomy in Management of Wireless Random Networks [102.02142856863563]
This paper presents a machine learning strategy that tackles a distributed optimization task in a wireless network with an arbitrary number of randomly interconnected nodes.
We develop a flexible deep neural network formalism termed distributed message-passing neural network (DMPNN) with forward and backward computations independent of the network topology.
arXiv Detail & Related papers (2021-06-15T09:03:28Z) - Efficient power allocation using graph neural networks and deep
algorithm unfolding [40.78748956518785]
We study the problem of optimal power allocation in a single-hop ad hoc wireless network.
We propose a hybrid neural architecture inspired by the unfolding of the algorithmic weighted minimum mean squared error (WMMSE)
We show that UWMMSE achieves robustness comparable to that of WMMSE while significantly reducing the computational complexity.
arXiv Detail & Related papers (2020-11-18T05:28:24Z) - Finite Versus Infinite Neural Networks: an Empirical Study [69.07049353209463]
Kernel methods outperform fully-connected finite-width networks.
Centered and ensembled finite networks have reduced posterior variance.
Weight decay and the use of a large learning rate break the correspondence between finite and infinite networks.
arXiv Detail & Related papers (2020-07-31T01:57:47Z) - Communication-Efficient Distributed Stochastic AUC Maximization with
Deep Neural Networks [50.42141893913188]
We study distributed algorithms for large-scale AUC maximization with a deep neural network as the predictive model.
Our method requires far fewer communication rounds in theory.
Our experiments on several datasets demonstrate the effectiveness of our method and confirm our theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.