Decoupled Greedy Learning of CNNs for Synchronous and Asynchronous
Distributed Learning
- URL: http://arxiv.org/abs/2106.06401v1
- Date: Fri, 11 Jun 2021 13:55:17 GMT
- Title: Decoupled Greedy Learning of CNNs for Synchronous and Asynchronous
Distributed Learning
- Authors: Eugene Belilovsky (MILA), Louis Leconte (MLIA, CMAP), Lucas Caccia
(MILA), Michael Eickenberg, Edouard Oyallon (MLIA)
- Abstract summary: We consider a simple alternative based on minimal feedback, which we call Decoupled Greedy Learning (DGL).
It is based on a classic greedy relaxation of the joint training objective, recently shown to be effective in the context of Convolutional Neural Networks (CNNs) on large-scale image classification.
We show theoretically and empirically that this approach converges and compare it to the sequential solvers.
- Score: 3.7722254371820987
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A commonly cited inefficiency of neural network training using
back-propagation is the update locking problem: each layer must wait for the
signal to propagate through the full network before updating. Several
alternatives that can alleviate this issue have been proposed. In this context,
we consider a simple alternative based on minimal feedback, which we call
Decoupled Greedy Learning (DGL). It is based on a classic greedy relaxation of
the joint training objective, recently shown to be effective in the context of
Convolutional Neural Networks (CNNs) on large-scale image classification. We
consider an optimization of this objective that permits us to decouple the
layer training, allowing for layers or modules in networks to be trained with a
potentially linear parallelization. With the use of a replay buffer we show
that this approach can be extended to asynchronous settings, where modules can
operate and continue to update with possibly large communication delays. To
address bandwidth and memory issues we propose an approach based on online
vector quantization. This allows us to drastically reduce the communication
bandwidth between modules and the memory required for replay buffers. We show
theoretically and empirically that this approach converges and compare it to
the sequential solvers. We demonstrate the effectiveness of DGL against
alternative approaches on the CIFAR-10 dataset and on the large-scale ImageNet
dataset.
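Read together, the abstract describes three mechanisms: a greedy per-module objective that removes update locking, a replay buffer that lets adjacent modules keep training on slightly stale activations, and online vector quantization that shrinks what must be stored and communicated. The following single-process PyTorch sketch shows how these pieces could fit together; the module sizes, buffer capacity, two-module setup, and the random (rather than online-learned) codebook are illustrative assumptions, not the authors' implementation.

```python
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class DGLModule(nn.Module):
    """One decoupled module: a CNN block plus a small auxiliary classifier.
    The block is trained only from its own auxiliary loss, so no module has
    to wait for a full forward/backward pass (no update locking)."""
    def __init__(self, c_in, c_out, n_classes):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out),
            nn.ReLU(), nn.MaxPool2d(2))
        self.aux_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(c_out, n_classes))

    def forward(self, x):
        h = self.block(x)
        return h, self.aux_head(h)

class ReplayBuffer:
    """Holds (activation, label) pairs so a downstream module can keep training
    on slightly stale inputs while the upstream module runs ahead of it."""
    def __init__(self, capacity=256):
        self.capacity, self.data = capacity, []

    def push(self, h, y):
        if len(self.data) >= self.capacity:
            self.data.pop(0)
        self.data.append((h.detach().cpu(), y.cpu()))

    def sample(self, batch_size):
        batch = random.sample(self.data, min(batch_size, len(self.data)))
        hs, ys = zip(*batch)
        return torch.cat(hs), torch.cat(ys)

def quantize(h, codebook):
    """Online vector quantization of activations: replace each spatial vector
    by its nearest codeword. Only the integer indices would need to be sent;
    the decoded tensor is returned here for convenience."""
    b, c, height, width = h.shape
    flat = h.permute(0, 2, 3, 1).reshape(-1, c)
    idx = torch.cdist(flat, codebook).argmin(dim=1)          # what gets transmitted
    deq = codebook[idx].reshape(b, height, width, c).permute(0, 3, 1, 2)
    return deq, idx

# --- illustrative training step for two decoupled modules --------------------
n_classes = 10
m1, m2 = DGLModule(3, 32, n_classes), DGLModule(32, 64, n_classes)
opt1 = torch.optim.SGD(m1.parameters(), lr=0.1)
opt2 = torch.optim.SGD(m2.parameters(), lr=0.1)
buffer = ReplayBuffer()
codebook = torch.randn(64, 32)     # illustrative; learned online in the paper

x = torch.randn(8, 3, 32, 32)      # stand-in for a CIFAR-10 batch
y = torch.randint(0, n_classes, (8,))

# Module 1: local update from its own auxiliary loss, then push compressed
# activations into the buffer (its outgoing queue towards module 2).
h1, logits1 = m1(x)
F.cross_entropy(logits1, y).backward()
opt1.step(); opt1.zero_grad()
h1_q, _ = quantize(h1.detach(), codebook)
buffer.push(h1_q, y)

# Module 2: trains on whatever the buffer currently holds, so it never waits
# for module 1 (asynchrony, with possibly stale activations).
h_in, y_in = buffer.sample(8)
_, logits2 = m2(h_in)
F.cross_entropy(logits2, y_in).backward()
opt2.step(); opt2.zero_grad()
```

In the asynchronous setting described in the abstract, the two update steps would run on different workers at different speeds, and only the quantization indices (plus labels) would need to cross the link between them.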
Related papers
- SIGMA: Sinkhorn-Guided Masked Video Modeling [69.31715194419091]
Sinkhorn-guided Masked Video Modelling (SIGMA) is a novel video pretraining method.
We distribute features of space-time tubes evenly across a limited number of learnable clusters.
Experimental results on ten datasets validate the effectiveness of SIGMA in learning more performant, temporally-aware, and robust video representations.
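Distributing features "evenly across a limited number of learnable clusters" is the kind of balanced soft assignment a Sinkhorn-Knopp projection produces. The snippet below is a generic sketch of that projection only; the feature and cluster counts, temperature, and iteration count are made-up values, and this is not SIGMA's actual training code.

```python
import torch
import torch.nn.functional as F

def sinkhorn_assign(scores, n_iters=3, temperature=0.05):
    """Balanced soft assignment of N features to K clusters: alternately
    normalize columns and rows so every cluster receives roughly equal mass."""
    q = torch.exp(scores / temperature)
    q = q / q.sum()
    n, k = q.shape
    for _ in range(n_iters):
        q = q / q.sum(dim=0, keepdim=True) / k   # equalize mass per cluster
        q = q / q.sum(dim=1, keepdim=True) / n   # each feature sums to 1/N
    return q * n                                  # rows now sum to ~1

tube_features = F.normalize(torch.randn(512, 128), dim=1)   # space-time tube features
clusters = F.normalize(torch.randn(32, 128), dim=1)         # learnable cluster centers
assignments = sinkhorn_assign(tube_features @ clusters.T)   # (512, 32) soft assignments
```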
arXiv Detail & Related papers (2024-07-22T08:04:09Z)
- Unlocking Deep Learning: A BP-Free Approach for Parallel Block-Wise Training of Neural Networks [9.718519843862937]
We introduce a block-wise BP-free (BWBPF) neural network that leverages local error signals to optimize sub-neural networks separately.
Our experimental results consistently show that this approach can identify transferable decoupled architectures for VGG and ResNet variations.
arXiv Detail & Related papers (2023-12-20T08:02:33Z)
- Rapid Network Adaptation: Learning to Adapt Neural Networks Using Test-Time Feedback [12.946419909506883]
We create a closed-loop system that makes use of a test-time feedback signal to adapt a network on the fly.
We show that this loop can be effectively implemented using a learning-based function, which realizes an amortized optimizer for the network.
This leads to an adaptation method, named Rapid Network Adaptation (RNA), that is notably more flexible and orders of magnitude faster than the baselines.
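A closed loop driven by a test-time feedback signal can be sketched as below. This is a simplified, gradient-based version of the idea: the paper replaces the hand-written optimizer with a learned, amortized function, and `feedback_loss` here is a placeholder for whatever signal is available at test time.

```python
import torch
import torch.nn as nn

def adapt_on_the_fly(model, x, feedback_loss, steps=5, lr=1e-3):
    """Closed-loop test-time adaptation: run the model, score its own output
    with the feedback signal, and nudge a small set of parameters (here the
    normalization layers) before producing the final prediction."""
    params = [p for m in model.modules()
              if isinstance(m, nn.BatchNorm2d) for p in m.parameters()]
    opt = torch.optim.SGD(params, lr=lr)
    for _ in range(steps):
        loss = feedback_loss(model(x), x)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model(x).detach()

# Usage sketch with a toy network and a stand-in feedback signal.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16),
                      nn.ReLU(), nn.Conv2d(16, 1, 1))
x = torch.randn(1, 3, 64, 64)
stand_in_feedback = lambda pred, inp: pred.var()   # placeholder signal
out = adapt_on_the_fly(model, x, stand_in_feedback)
```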
arXiv Detail & Related papers (2023-09-27T16:20:39Z)
- Decouple Graph Neural Networks: Train Multiple Simple GNNs Simultaneously Instead of One [60.5818387068983]
Graph neural networks (GNNs) suffer from severe training inefficiency.
We propose to decouple a multi-layer GNN as multiple simple modules for more efficient training.
We show that the proposed framework is highly efficient with reasonable performance.
arXiv Detail & Related papers (2023-04-20T07:21:32Z)
- Receptive Field-based Segmentation for Distributed CNN Inference Acceleration in Collaborative Edge Computing [93.67044879636093]
We study inference acceleration using distributed convolutional neural networks (CNNs) in a collaborative edge computing network.
We propose a novel collaborative edge computing scheme that uses fused-layer parallelization to partition a CNN model into multiple blocks of convolutional layers.
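The receptive-field-based partitioning can be illustrated with a toy example: pad each input tile by the receptive-field radius of the convolutional block it will traverse, so that each device can compute its slice of the output without intermediate communication, then stitch the slices. The block, sizes, and two-device split below are assumptions for illustration, not the paper's system.

```python
import torch
import torch.nn as nn

# A block of three 'same'-padded 3x3 convolutions: receptive-field radius = 3.
block = nn.Sequential(*[nn.Conv2d(8, 8, 3, padding=1) for _ in range(3)])
rf_radius = 3

x = torch.randn(1, 8, 4, 64)          # toy feature map, partitioned along width

def run_tile(x, lo, hi):
    """Compute output columns [lo, hi) on one device by giving it the input
    columns extended by the receptive-field radius on both sides."""
    pad_lo = max(lo - rf_radius, 0)
    pad_hi = min(hi + rf_radius, x.shape[-1])
    out = block(x[..., pad_lo:pad_hi])
    return out[..., lo - pad_lo : lo - pad_lo + (hi - lo)]

# Two "devices" each compute half of the output; the halves are then fused.
left, right = run_tile(x, 0, 32), run_tile(x, 32, 64)
fused = torch.cat([left, right], dim=-1)
assert torch.allclose(fused, block(x), atol=1e-5)   # matches the undivided result
```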
arXiv Detail & Related papers (2022-07-22T18:38:11Z)
- Learning in Feedback-driven Recurrent Spiking Neural Networks using full-FORCE Training [4.124948554183487]
We propose a supervised training procedure for RSNNs, where a second network is introduced only during training.
The proposed training procedure consists of generating targets for both recurrent and readout layers.
We demonstrate the improved performance and noise robustness of the proposed full-FORCE training procedure to model 8 dynamical systems.
arXiv Detail & Related papers (2022-05-26T19:01:19Z)
- Local Critic Training for Model-Parallel Learning of Deep Neural Networks [94.69202357137452]
We propose a novel model-parallel learning method, called local critic training.
We show that the proposed approach successfully decouples the update process of the layer groups for both convolutional neural networks (CNNs) and recurrent neural networks (RNNs).
We also show that trained networks by the proposed method can be used for structural optimization.
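The decoupling here differs from purely greedy auxiliary losses: each layer group gets a small local critic trained to predict the final loss from that group's output, and the group descends its critic's estimate instead of waiting for the true backward signal. The sketch below is a generic illustration of that pattern with assumed architectures and sizes, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A layer group, the downstream remainder of the network, and the group's
# local critic, which maps the group's output to an estimated final loss.
group  = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
rest   = nn.Sequential(nn.Flatten(), nn.Linear(16 * 32 * 32, 10))
critic = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))

opt_group  = torch.optim.SGD(group.parameters(), lr=0.01)
opt_critic = torch.optim.SGD(critic.parameters(), lr=0.01)

x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))

# 1) Group update: minimize the critic's *estimate* of the final loss, so the
#    group never waits for the real backward pass through `rest`.
est_loss = critic(group(x)).mean()
opt_group.zero_grad(); est_loss.backward(); opt_group.step()

# 2) Critic update (can happen later, or on another worker): regress the
#    estimate toward the true downstream loss for the same activations.
with torch.no_grad():
    true_loss = F.cross_entropy(rest(group(x)), y)
critic_loss = F.mse_loss(critic(group(x).detach()).mean(), true_loss)
opt_critic.zero_grad(); critic_loss.backward(); opt_critic.step()
```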
arXiv Detail & Related papers (2021-02-03T09:30:45Z)
- Belief Propagation Reloaded: Learning BP-Layers for Labeling Problems [83.98774574197613]
We take one of the simplest inference methods, truncated max-product belief propagation, and add what is necessary to make it a proper component of a deep learning model.
This BP-Layer can be used as the final or an intermediate block in convolutional neural networks (CNNs).
The model is applicable to a range of dense prediction problems, is well-trainable and provides parameter-efficient and robust solutions in stereo, optical flow and semantic segmentation.
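To make the building block concrete, the toy NumPy function below runs one forward-backward sweep of truncated max-product belief propagation (written in min-sum form) along a single scanline, which is the flavor of computation a BP-Layer packages as a differentiable block. The costs, label count, and truncated-linear pairwise penalty are illustrative choices, not the paper's exact model.

```python
import numpy as np

def min_sum_bp_1d(unary, lam=1.0, tau=2.0):
    """Min-sum (max-product in the log domain) message passing on a chain.
    unary: (W, L) data costs for W pixels and L labels; the pairwise term is
    a truncated linear penalty min(lam * |k - l|, tau)."""
    W, L = unary.shape
    labels = np.arange(L)
    pairwise = np.minimum(lam * np.abs(labels[:, None] - labels[None, :]), tau)

    fwd = np.zeros((W, L))    # messages passed left-to-right
    bwd = np.zeros((W, L))    # messages passed right-to-left
    for i in range(1, W):
        fwd[i] = np.min(unary[i - 1] + fwd[i - 1] + pairwise.T, axis=1)
        j = W - 1 - i
        bwd[j] = np.min(unary[j + 1] + bwd[j + 1] + pairwise, axis=1)

    beliefs = unary + fwd + bwd           # lower is better
    return beliefs.argmin(axis=1)         # e.g. one disparity label per pixel

costs = np.random.rand(8, 4)              # 8 pixels, 4 candidate labels
print(min_sum_bp_1d(costs))
```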
arXiv Detail & Related papers (2020-03-13T13:11:35Z)
- Model Fusion via Optimal Transport [64.13185244219353]
We present a layer-wise model fusion algorithm for neural networks.
We show that this can successfully yield "one-shot" knowledge transfer between neural networks trained on heterogeneous non-i.i.d. data.
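The layer-wise fusion can be sketched in a few lines: for each layer, find a correspondence between the neurons of the two networks (here a hard one-to-one matching, a special case of an optimal-transport plan) and average the aligned weights. This single-layer NumPy/SciPy sketch uses made-up sizes and ignores the propagation of the alignment to the next layer's inputs, which a full fusion must handle.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def fuse_layer(w_a, w_b):
    """Align the output neurons of w_b to those of w_a, then average.
    w_a, w_b: (out_features, in_features) weights of the same layer in two
    models trained separately (e.g. on heterogeneous, non-i.i.d. data)."""
    # Matching cost: squared distance between incoming-weight vectors.
    cost = ((w_a[:, None, :] - w_b[None, :, :]) ** 2).sum(-1)
    _, cols = linear_sum_assignment(cost)    # hard transport plan (a permutation)
    return 0.5 * (w_a + w_b[cols])           # "one-shot" fused layer

w_a, w_b = np.random.randn(64, 128), np.random.randn(64, 128)
print(fuse_layer(w_a, w_b).shape)            # (64, 128)
```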
arXiv Detail & Related papers (2019-10-12T22:07:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.