Consensus Driven Learning
- URL: http://arxiv.org/abs/2005.10300v1
- Date: Wed, 20 May 2020 18:24:19 GMT
- Title: Consensus Driven Learning
- Authors: Kyle Crandall and Dustin Webb
- Abstract summary: We propose a new method of distributed, decentralized learning that allows a network of nodes to coordinate their training using asynchronous updates over an unreliable network.
This is achieved by taking inspiration from Distributed Averaging Consensus algorithms to coordinate the various nodes.
We show that our coordination method allows models to be learned on highly biased datasets, and in the presence of intermittent communication failure.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: As the complexity of our neural network models grows, so too do the data and
computation requirements for successful training. One proposed solution to this
problem is training on a distributed network of computational devices, thus
distributing the computational and data storage loads. This strategy has
already seen some adoption by the likes of Google and other companies. In this
paper we propose a new method of distributed, decentralized learning that
allows a network of computation nodes to coordinate their training using
asynchronous updates over an unreliable network while only having access to a
local dataset. This is achieved by taking inspiration from Distributed
Averaging Consensus algorithms to coordinate the various nodes. Sharing the
internal model instead of the training data allows the original raw data to
remain with the computation node. The asynchronous nature and lack of
centralized coordination allows this paradigm to function with limited
communication requirements. We demonstrate our method on the MNIST, Fashion
MNIST, and CIFAR10 datasets. We show that our coordination method allows models
to be learned on highly biased datasets, and in the presence of intermittent
communication failure.
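To make the coordination idea concrete, below is a minimal sketch of a gossip-style distributed-averaging update on model weights, in the spirit described by the abstract; it is not the authors' implementation, and the mixing rate `epsilon`, the node setup, and the function name are illustrative assumptions.
```python
import numpy as np

# Minimal sketch of a distributed-averaging-style coordination step
# (illustrative only, not the authors' implementation). Each node keeps
# its own weight vector and, whenever neighbor updates happen to arrive,
# nudges its weights toward the neighbors' average; missing messages on
# an unreliable link simply leave the local weights unchanged.

def consensus_step(local_weights, neighbor_weights, epsilon=0.1):
    """Blend local weights toward the received neighbor weights.

    `epsilon` is an assumed mixing rate; `neighbor_weights` may be empty
    when communication fails for a round.
    """
    if not neighbor_weights:
        return local_weights
    disagreement = np.mean([w - local_weights for w in neighbor_weights], axis=0)
    return local_weights + epsilon * disagreement

# Toy usage: two nodes that would otherwise drift apart on biased local
# data pull toward each other in between ordinary local SGD steps.
w_a = np.array([1.0, 2.0])
w_b = np.array([3.0, 0.0])
print(consensus_step(w_a, [w_b]))  # [1.2 1.8] -- w_a moves toward w_b
```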
Related papers
- Communication-Efficient Decentralized Federated Learning via One-Bit Compressive Sensing
Decentralized federated learning (DFL) has gained popularity due to its practicality across various applications.
Compared to the centralized version, training a shared model among a large number of nodes in DFL is more challenging.
We develop a novel algorithm based on the framework of the inexact alternating direction method (iADM).
arXiv Detail & Related papers (2023-08-31T12:22:40Z)
- Online Distributed Learning with Quantized Finite-Time Coordination
In our setting a set of agents need to cooperatively train a learning model from streaming data.
We propose a distributed algorithm that relies on a quantized, finite-time coordination protocol.
We analyze the performance of the proposed algorithm in terms of the mean distance from the online solution.
arXiv Detail & Related papers (2023-07-13T08:36:15Z) - Towards a Better Theoretical Understanding of Independent Subnetwork Training [56.24689348875711]
We take a closer theoretical look at Independent Subnetwork Training (IST), a recently proposed and highly effective technique for distributed training.
We identify fundamental differences between IST and alternative approaches, such as distributed methods with compressed communication.
arXiv Detail & Related papers (2023-06-28T18:14:22Z)
- RelaySum for Decentralized Deep Learning on Heterogeneous Data
In decentralized machine learning, workers compute model updates on their local data.
Because the workers only communicate with a few neighbors without central coordination, these updates propagate progressively over the network.
This paradigm enables distributed training on networks without all-to-all connectivity, helping to protect data privacy as well as to reduce the communication cost of distributed training in data centers.
arXiv Detail & Related papers (2021-10-08T14:55:32Z)
- Decentralized federated learning of deep neural networks on non-iid data
We tackle the non-IID problem of learning a personalized deep learning model in a decentralized setting.
We propose a method named Performance-Based Neighbor Selection (PENS) where clients with similar data detect each other and cooperate.
PENS achieves higher accuracies than strong baselines.
arXiv Detail & Related papers (2021-07-18T19:05:44Z)
- Clustered Federated Learning via Generalized Total Variation Minimization
We study optimization methods to train local (or personalized) models for local datasets with a decentralized network structure.
Our main conceptual contribution is to formulate federated learning as generalized total variation (GTV) minimization.
Our main algorithmic contribution is a fully decentralized federated learning algorithm.
arXiv Detail & Related papers (2021-05-26T18:07:19Z)
- Unsupervised Learning for Asynchronous Resource Allocation in Ad-hoc Wireless Networks
We design an unsupervised learning method based on Aggregation Graph Neural Networks (Agg-GNNs).
We capture the asynchrony by modeling the activation pattern as a characteristic of each node and train a policy-based resource allocation method.
arXiv Detail & Related papers (2020-11-05T03:38:36Z)
- A Low Complexity Decentralized Neural Net with Centralized Equivalence using Layer-wise Learning
We design a low complexity decentralized learning algorithm to train a recently proposed large neural network in distributed processing nodes (workers).
In our setup, the training data is distributed among the workers but is not shared in the training process due to privacy and security concerns.
We show that it is possible to achieve learning performance equivalent to having all the data available in a single place.
arXiv Detail & Related papers (2020-09-29T13:08:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.