Asynchronous Decentralized Learning of a Neural Network
- URL: http://arxiv.org/abs/2004.05082v1
- Date: Fri, 10 Apr 2020 15:53:37 GMT
- Title: Asynchronous Decentralized Learning of a Neural Network
- Authors: Xinyue Liang, Alireza M. Javid, Mikael Skoglund, Saikat Chatterjee
- Abstract summary: We exploit an asynchronous computing framework, namely ARock, to learn a deep neural network called the self-size estimating feedforward neural network (SSFN) in a decentralized scenario.
Asynchronous decentralized SSFN relaxes the communication bottleneck by allowing one-node activation and one-sided communication, which reduces the communication overhead significantly.
We compare asynchronous dSSFN with traditional synchronous dSSFN in experiments, which show the competitive performance of asynchronous dSSFN, especially when the communication network is sparse.
- Score: 49.15799302636519
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we exploit an asynchronous computing framework, namely ARock, to learn a deep neural network called the self-size estimating feedforward neural network (SSFN) in a decentralized scenario. Using this algorithm, namely asynchronous decentralized SSFN (dSSFN), we provide the centralized equivalent solution under certain technical assumptions. Asynchronous dSSFN relaxes the communication bottleneck by allowing one-node activation and one-sided communication, which reduces the communication overhead significantly and consequently increases the learning speed. We compare asynchronous dSSFN with traditional synchronous dSSFN in the experiments, which show the competitive performance of asynchronous dSSFN, especially when the communication network is sparse.
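To make the one-node activation and one-sided communication idea concrete, the following is a minimal, hypothetical Python simulation in the spirit of ARock, not the paper's actual dSSFN procedure: at each tick a single randomly activated node updates its own parameters using only cached, possibly stale copies published by its neighbors, so no global synchronization barrier is required. The ring graph, the consensus-penalized least-squares objective, and all variable names below are illustrative assumptions.

    # Illustrative sketch only: a single-process simulation of an
    # ARock-style asynchronous decentralized update. At each tick one
    # randomly activated node refreshes its parameters using cached
    # (possibly stale) neighbor copies -- one-sided communication, no
    # global barrier. The graph, objective, and step sizes are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    n_nodes, dim, rho, step = 4, 3, 1.0, 0.1

    # Ring communication graph: node i talks only to nodes i-1 and i+1.
    neighbors = {i: [(i - 1) % n_nodes, (i + 1) % n_nodes] for i in range(n_nodes)}

    # Each node holds private data (A_i, b_i) for a local least-squares term.
    A = [rng.normal(size=(10, dim)) for _ in range(n_nodes)]
    b = [rng.normal(size=10) for _ in range(n_nodes)]

    # Published parameter copies; neighbors read these without synchronization.
    x = [np.zeros(dim) for _ in range(n_nodes)]

    def local_gradient(i, x_i, neighbor_copies):
        """Gradient of ||A_i x - b_i||^2 / (2 m) plus a consensus penalty
        pulling x_i toward the (possibly stale) neighbor copies."""
        grad = A[i].T @ (A[i] @ x_i - b[i]) / len(b[i])
        for x_j in neighbor_copies:
            grad += rho * (x_i - x_j)
        return grad

    for tick in range(2000):
        i = rng.integers(n_nodes)                    # one node activates per tick
        stale = [x[j].copy() for j in neighbors[i]]  # one-sided reads of neighbors
        x[i] = x[i] - step * local_gradient(i, x[i], stale)

    print("node estimates (they should roughly agree):")
    for i in range(n_nodes):
        print(i, np.round(x[i], 3))

The sketch only mimics the activation and communication pattern; the actual asynchronous dSSFN applies ARock to the decentralized SSFN learning problem described in the abstract.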
Related papers
- LC-TTFS: Towards Lossless Network Conversion for Spiking Neural Networks with TTFS Coding [55.64533786293656]
We show that our algorithm can achieve a near-perfect mapping between the activation values of an ANN and the spike times of an SNN on a number of challenging AI tasks.
The study paves the way for deploying ultra-low-power TTFS-based SNNs on power-constrained edge computing platforms.
arXiv Detail & Related papers (2023-10-23T14:26:16Z)
- FAVANO: Federated AVeraging with Asynchronous NOdes [14.412305295989444]
We propose a novel centralized Asynchronous Federated Learning (FL) framework, FAVANO, for training Deep Neural Networks (DNNs) in resource-constrained environments.
arXiv Detail & Related papers (2023-05-25T14:30:17Z)
- AEGNN: Asynchronous Event-based Graph Neural Networks [54.528926463775946]
Event-based Graph Neural Networks generalize standard GNNs to process events as "evolving" spatio-temporal graphs.
AEGNNs are easily trained on synchronous inputs and can be converted to efficient, "asynchronous" networks at test time.
arXiv Detail & Related papers (2022-03-31T16:21:12Z)
- Locally Asynchronous Stochastic Gradient Descent for Decentralised Deep Learning [0.0]
Local Asynchronous SGD (LASGD) is an asynchronous decentralized algorithm that relies on All Reduce for model synchronization.
We empirically validate LASGD's performance on image classification tasks on the ImageNet dataset.
arXiv Detail & Related papers (2022-03-24T14:25:15Z)
- Asynchronous Decentralized Learning over Unreliable Wireless Networks [4.630093015127539]
Decentralized learning enables edge users to collaboratively train models by exchanging information via device-to-device communication.
We propose an asynchronous decentralized stochastic gradient descent (DSGD) algorithm, which is robust to the inherent computation and communication failures occurring at the wireless network edge.
Experimental results corroborate our analysis, demonstrating the benefits of asynchronicity and outdated gradient information reuse in decentralized learning over unreliable wireless networks (a simplified sketch of such stale-information reuse appears after this list).
arXiv Detail & Related papers (2022-02-02T11:00:49Z)
- Finite-Time Consensus Learning for Decentralized Optimization with Nonlinear Gossiping [77.53019031244908]
We present a novel decentralized learning framework based on nonlinear gossiping (NGO) that enjoys an appealing finite-time consensus property to achieve better synchronization.
Our analysis of how communication delay and randomized chats affect learning further enables the derivation of practical variants.
arXiv Detail & Related papers (2021-11-04T15:36:25Z)
- Deep Chaos Synchronization [0.0]
We introduce a novel Deep Chaos Synchronization (DCS) system using a Convolutional Neural Network (CNN).
We also provide a novel Recurrent Neural Network (RNN)-based chaotic synchronization system for comparative analysis.
arXiv Detail & Related papers (2021-04-17T03:57:53Z)
- Accelerating Neural Network Training with Distributed Asynchronous and Selective Optimization (DASO) [0.0]
We introduce the Distributed Asynchronous and Selective Optimization (DASO) method to accelerate network training.
DASO uses a hierarchical and asynchronous communication scheme comprised of node-local and global networks.
We show that DASO yields a reduction in training time of up to 34% on classical and state-of-the-art networks.
arXiv Detail & Related papers (2021-04-12T16:02:20Z)
- Federated Learning over Wireless Device-to-Device Networks: Algorithms and Convergence Analysis [46.76179091774633]
This paper studies federated learning (FL) over wireless device-to-device (D2D) networks.
First, we introduce generic digital and analog wireless implementations of communication-efficient DSGD algorithms.
Second, under the assumptions of convexity and connectivity, we provide convergence bounds for both implementations.
arXiv Detail & Related papers (2021-01-29T17:42:26Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
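As referenced in the unreliable-wireless-networks entry above, here is a minimal, hypothetical sketch (not the cited paper's algorithm) of a decentralized SGD round that reuses the last successfully received neighbor information when a link fails. It reuses stale model copies rather than gradients, and the failure probability, uniform mixing weights, and toy quadratic losses are illustrative assumptions.

    # Hypothetical sketch: one decentralized SGD round per iteration where
    # each node mixes with neighbor models and, when a link fails, falls
    # back to the last successfully received (stale) copy. Failure rate,
    # uniform mixing, and the toy quadratic losses are assumptions.
    import numpy as np

    rng = np.random.default_rng(1)
    n_nodes, dim, p_fail, lr = 4, 3, 0.3, 0.05

    neighbors = {i: [(i - 1) % n_nodes, (i + 1) % n_nodes] for i in range(n_nodes)}
    targets = [rng.normal(size=dim) for _ in range(n_nodes)]  # node-local optima
    x = [np.zeros(dim) for _ in range(n_nodes)]
    last_received = {(i, j): np.zeros(dim)
                     for i in range(n_nodes) for j in neighbors[i]}

    for rnd in range(500):
        new_x = []
        for i in range(n_nodes):
            copies = []
            for j in neighbors[i]:
                if rng.random() > p_fail:             # transmission succeeded
                    last_received[(i, j)] = x[j].copy()
                copies.append(last_received[(i, j)])  # stale copy if it failed
            mixed = np.mean([x[i]] + copies, axis=0)  # uniform gossip mixing
            grad = mixed - targets[i]                 # gradient of ||x - t_i||^2 / 2
            new_x.append(mixed - lr * grad)
        x = new_x

    print("per-node estimates (each should be near the mean of the targets):")
    for i in range(n_nodes):
        print(i, np.round(x[i], 3))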