BackLink: Supervised Local Training with Backward Links
- URL: http://arxiv.org/abs/2205.07141v1
- Date: Sat, 14 May 2022 21:49:47 GMT
- Title: BackLink: Supervised Local Training with Backward Links
- Authors: Wenzhe Guo, Mohammed E. Fouda, Ahmed M. Eltawil, and Khaled N. Salama
- Abstract summary: This work proposes a novel local training algorithm, BackLink, which introduces inter-module backward dependency and allows errors to flow between modules.
Our method can lead to up to a 79% reduction in memory cost and a 52% reduction in simulation runtime in ResNet110 compared to the standard BP.
- Score: 2.104758015212034
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Empowered by the backpropagation (BP) algorithm, deep neural networks have
dominated the race in solving various cognitive tasks. The restricted training
pattern in the standard BP requires end-to-end error propagation, causing large
memory cost and prohibiting model parallelization. Existing local training
methods aim to resolve the training obstacle by completely cutting off the
backward path between modules and isolating their gradients to reduce memory
cost and accelerate the training process. These methods prevent errors from
flowing between modules, and hence block information exchange, resulting in inferior
performance. This work proposes a novel local training algorithm, BackLink,
which introduces inter-module backward dependency and allows errors to flow
between modules. The algorithm thus facilitates backward information flow along
the network. To preserve the computational advantage of local training,
BackLink restricts the error propagation length within the module. Extensive
experiments performed in various deep convolutional neural networks demonstrate
that our method consistently improves the classification performance of local
training algorithms over other methods. For example, in ResNet32 with 16 local
modules, our method surpasses the conventional greedy local training method by
4.00% and a recent work by 1.83% in accuracy on CIFAR10, respectively.
Analysis of computational costs reveals that small overheads are incurred in
GPU memory costs and runtime on multiple GPUs. Our method can lead to up to a 79%
reduction in memory cost and a 52% reduction in simulation runtime in ResNet110 compared
to the standard BP. Therefore, our method could create new opportunities for
improving training algorithms towards better efficiency and biological
plausibility.
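To make the mechanism concrete, below is a minimal PyTorch-style sketch of local training with a short backward link between modules, as described in the abstract: each module is trained by its own auxiliary loss, and the next module's error is allowed to reach back only through the last few layers of the current module. The module sizes, auxiliary heads, and the `link_len` parameter are illustrative assumptions, not the authors' reference implementation.

```python
# Minimal sketch: supervised local training with a short backward link between modules.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_module(in_ch, out_ch, n_layers=3):
    """A toy local module: a few conv blocks standing in for a ResNet stage."""
    blocks = [nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                            nn.BatchNorm2d(out_ch), nn.ReLU())]
    blocks += [nn.Sequential(nn.Conv2d(out_ch, out_ch, 3, padding=1),
                             nn.BatchNorm2d(out_ch), nn.ReLU())
               for _ in range(n_layers - 1)]
    return nn.ModuleList(blocks)

def make_head(ch, n_classes=10):
    """Auxiliary classifier that supplies each module's local loss."""
    return nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(ch, n_classes))

modules = nn.ModuleList([make_module(3, 32), make_module(32, 32), make_module(32, 32)])
heads = nn.ModuleList([make_head(32) for _ in modules])
opt = torch.optim.SGD(list(modules.parameters()) + list(heads.parameters()), lr=0.1)
link_len = 1  # how many trailing layers of a module the next module's error may reach

def training_step(x, y):
    opt.zero_grad()
    total_loss = 0.0
    h = x                                    # tensor handed to the current module
    for module, head in zip(modules, heads):
        layers = list(module)
        cut = len(layers) - link_len         # detach point inside the module
        z, cut_act = h, None
        for i, layer in enumerate(layers):
            z = layer(z)
            if i + 1 == cut:
                cut_act = z.detach()         # errors from later modules stop here
        total_loss = total_loss + F.cross_entropy(head(z), y)  # this module's local loss
        # Re-run only the last `link_len` layers from the detached activation to build the
        # tensor passed on: the next module's error can flow back through these layers
        # (the backward link), but no further into this module or earlier ones.
        z_link = cut_act if cut_act is not None else z.detach()
        for layer in layers[cut:]:
            z_link = layer(z_link)
        h = z_link
    total_loss.backward()                    # gradients stay local except along the short links
    opt.step()
    return total_loss.item()

# Example: one step on a random CIFAR-sized batch.
loss = training_step(torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,)))
```

Setting link_len to zero in this sketch recovers fully gradient-isolated greedy local training, which is the baseline the abstract compares against.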
Related papers
- Towards Interpretable Deep Local Learning with Successive Gradient Reconciliation [70.43845294145714]
Relieving the reliance of neural network training on a global back-propagation (BP) has emerged as a notable research topic.
We propose a local training strategy that successively regularizes the gradient reconciliation between neighboring modules.
Our method can be integrated into both local-BP and BP-free settings.
arXiv Detail & Related papers (2024-06-07T19:10:31Z)
- Unlocking Deep Learning: A BP-Free Approach for Parallel Block-Wise Training of Neural Networks [9.718519843862937]
We introduce a block-wise BP-free (BWBPF) neural network that leverages local error signals to optimize sub-neural networks separately.
Our experimental results consistently show that this approach can identify transferable decoupled architectures for VGG and ResNet variations.
arXiv Detail & Related papers (2023-12-20T08:02:33Z)
- Go beyond End-to-End Training: Boosting Greedy Local Learning with Context Supply [0.12187048691454236]
Greedy local learning partitions the network into gradient-isolated modules and trains them under supervision with local preliminary losses.
As the number of gradient-isolated modules increases, the performance of the local learning scheme degrades substantially.
We propose a ContSup scheme, which incorporates context supply between isolated modules to compensate for information loss.
arXiv Detail & Related papers (2023-12-12T10:25:31Z)
- Instant Complexity Reduction in CNNs using Locality-Sensitive Hashing [50.79602839359522]
We propose HASTE (Hashing for Tractable Efficiency), a parameter-free and data-free module that acts as a plug-and-play replacement for any regular convolution module.
Using locality-sensitive hashing (LSH), we are able to drastically compress latent feature maps without sacrificing much accuracy.
In particular, we are able to instantly drop 46.72% of FLOPs while only losing 1.25% accuracy by just swapping the convolution modules in a ResNet34 on CIFAR-10 for our HASTE module.
arXiv Detail & Related papers (2023-09-29T13:09:40Z)
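As a concrete reference for the LSH idea in the entry above, the sketch below hashes each spatial position of a feature map into a bucket with random hyperplanes and replaces positions in the same bucket by their mean. It is a generic illustration of locality-sensitive hashing on latent feature maps, not the HASTE module itself; the bucketing granularity and the averaging step are simplifying assumptions.

```python
# Generic random-hyperplane LSH on a latent feature map (illustration only).
import torch

def random_hyperplane_lsh(feat, n_bits=8):
    """Hash each spatial position's channel vector into one of 2**n_bits buckets."""
    b, c, h, w = feat.shape
    planes = torch.randn(c, n_bits, device=feat.device)
    x = feat.permute(0, 2, 3, 1).reshape(b, h * w, c)   # (B, HW, C)
    bits = (x @ planes > 0).long()                       # sign pattern per position
    powers = 2 ** torch.arange(n_bits, device=feat.device)
    return (bits * powers).sum(-1).view(b, h, w)         # integer bucket id per position

def bucket_average(feat, codes):
    """Replace each position's vector by the mean of its bucket (a crude compression)."""
    b, c, h, w = feat.shape
    x = feat.permute(0, 2, 3, 1).reshape(b, h * w, c)
    flat_codes = codes.view(b, h * w)
    out = torch.empty_like(x)
    for i in range(b):                                   # per-sample bucket means
        for code in flat_codes[i].unique():
            mask = flat_codes[i] == code
            out[i, mask] = x[i, mask].mean(dim=0)
    return out.view(b, h, w, c).permute(0, 3, 1, 2)

feat = torch.randn(2, 64, 8, 8)
codes = random_hyperplane_lsh(feat)
compressed = bucket_average(feat, codes)   # similar positions now share one vector
```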
- Learning a Consensus Sub-Network with Polarization Regularization and One Pass Training [3.2214522506924093]
Pruning schemes create extra overhead either by iterative training and fine-tuning for static pruning or repeated computation of a dynamic pruning graph.
We propose a new parameter pruning strategy for learning a lighter-weight sub-network that minimizes the energy cost while maintaining comparable performance to the fully parameterised network on given downstream tasks.
Our results on CIFAR-10 and CIFAR-100 suggest that our scheme can remove 50% of connections in deep networks with less than 1% reduction in classification accuracy.
arXiv Detail & Related papers (2023-02-17T09:37:17Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
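For readers unfamiliar with snnTorch, here is a minimal usage sketch of a leaky integrate-and-fire layer unrolled over time. It is independent of the IPU-specific release mentioned in the entry above; the layer sizes, number of time steps, and decay factor are arbitrary illustrative choices.

```python
# Minimal snnTorch usage: a leaky integrate-and-fire (LIF) layer unrolled over time.
import torch
import torch.nn as nn
import snntorch as snn

fc = nn.Linear(784, 100)
lif = snn.Leaky(beta=0.9)          # membrane decay factor

x = torch.rand(25, 8, 784)         # (time steps, batch, features) input
mem = lif.init_leaky()             # initial membrane potential
spikes = []
for t in range(x.shape[0]):
    cur = fc(x[t])                 # synaptic current at step t
    spk, mem = lif(cur, mem)       # spike output and updated membrane state
    spikes.append(spk)
spikes = torch.stack(spikes)       # (time steps, batch, 100) binary spike trains
```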
- Learning in Feedback-driven Recurrent Spiking Neural Networks using full-FORCE Training [4.124948554183487]
We propose a supervised training procedure for RSNNs, where a second network is introduced only during the training.
The proposed training procedure consists of generating targets for both recurrent and readout layers.
We demonstrate the improved performance and noise robustness of the proposed full-FORCE training procedure to model 8 dynamical systems.
arXiv Detail & Related papers (2022-05-26T19:01:19Z)
- Online Convolutional Re-parameterization [51.97831675242173]
We present online convolutional re-parameterization (OREPA), a two-stage pipeline, aiming to reduce the huge training overhead by squeezing the complex training-time block into a single convolution.
Compared with the state-of-the-art re-param models, OREPA is able to save the training-time memory cost by about 70% and accelerate the training speed by around 2x.
We also conduct experiments on object detection and semantic segmentation and show consistent improvements on the downstream tasks.
arXiv Detail & Related papers (2022-04-02T09:50:19Z)
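The sketch below shows the classic convolution + BatchNorm fusion on which structural re-parameterization methods build: after training, the BN affine transform is folded into the convolution so the deployed block is a single conv. This is a generic illustration of the underlying idea, not the OREPA pipeline itself.

```python
# Generic conv + BatchNorm fusion (structural re-parameterization building block).
import torch
import torch.nn as nn

def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    fused = nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                      stride=conv.stride, padding=conv.padding, bias=True)
    # BN(y) = gamma * (y - mean) / sqrt(var + eps) + beta, applied per output channel.
    with torch.no_grad():
        scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)          # (out_channels,)
        fused.weight.copy_(conv.weight * scale.view(-1, 1, 1, 1))
        conv_bias = conv.bias if conv.bias is not None else torch.zeros_like(bn.running_mean)
        fused.bias.copy_((conv_bias - bn.running_mean) * scale + bn.bias)
    return fused

# Quick numerical check in eval mode: the fused conv matches conv -> BN.
conv, bn = nn.Conv2d(8, 16, 3, padding=1), nn.BatchNorm2d(16)
bn.eval()
x = torch.randn(2, 8, 32, 32)
with torch.no_grad():
    assert torch.allclose(bn(conv(x)), fuse_conv_bn(conv, bn)(x), atol=1e-5)
```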
- Efficient Training of Spiking Neural Networks with Temporally-Truncated Local Backpropagation through Time [1.926678651590519]
Training spiking neural networks (SNNs) has remained challenging due to complex neural dynamics and intrinsic non-differentiability in firing functions.
This work proposes an efficient and direct training algorithm for SNNs that integrates a locally-supervised training method with a temporally-truncated BPTT algorithm.
arXiv Detail & Related papers (2021-12-13T07:44:58Z)
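Below is a generic sketch of temporally-truncated backpropagation through time: the recurrent state is detached every few steps, so errors can reach back at most that many steps. The entry above combines this kind of temporal truncation with layer-local supervision for SNNs; the plain RNN cell, loss, and truncation length here are illustrative assumptions, not the paper's setup.

```python
# Generic temporally-truncated BPTT: detach the hidden state every `trunc_len` steps.
import torch
import torch.nn as nn

cell, readout = nn.RNNCell(16, 64), nn.Linear(64, 10)
opt = torch.optim.Adam(list(cell.parameters()) + list(readout.parameters()), lr=1e-3)
trunc_len = 5

def train_sequence(x, y):
    """x: (time, batch, 16) inputs, y: (batch,) class labels."""
    h = torch.zeros(x.shape[1], 64)
    total, window_loss = 0.0, 0.0
    for t in range(x.shape[0]):
        h = cell(x[t], h)
        window_loss = window_loss + nn.functional.cross_entropy(readout(h), y)
        if (t + 1) % trunc_len == 0 or t == x.shape[0] - 1:
            opt.zero_grad()
            window_loss.backward()     # errors flow back at most trunc_len steps
            opt.step()
            total += window_loss.item()
            h = h.detach()             # start the next window from a detached state
            window_loss = 0.0
    return total / x.shape[0]

print(train_sequence(torch.randn(20, 4, 16), torch.randint(0, 10, (4,))))
```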
- Local Critic Training for Model-Parallel Learning of Deep Neural Networks [94.69202357137452]
We propose a novel model-parallel learning method, called local critic training.
We show that the proposed approach successfully decouples the update process of the layer groups for both convolutional neural networks (CNNs) and recurrent neural networks (RNNs).
We also show that trained networks by the proposed method can be used for structural optimization.
arXiv Detail & Related papers (2021-02-03T09:30:45Z)
- RNN Training along Locally Optimal Trajectories via Frank-Wolfe Algorithm [50.76576946099215]
We propose a novel and efficient training method for RNNs by iteratively seeking a local minima on the loss surface within a small region.
We develop a novel RNN training method for which, surprisingly, the overall training cost is empirically observed to be lower than that of back-propagation, even with the additional per-iteration cost.
arXiv Detail & Related papers (2020-10-12T01:59:18Z)
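To illustrate the "small region" search in the entry above, here is a generic Frank-Wolfe loop that minimizes a loss over an L2 ball around the current parameters: each iteration calls a linear minimization oracle on the ball and takes a convex-combination step. The radius, step-size schedule, and toy quadratic loss are assumptions for illustration, not the paper's exact setup.

```python
# Generic Frank-Wolfe minimization over a small L2 ball around the current parameters.
import torch

def frank_wolfe_ball(loss_fn, w0, radius=0.1, n_iters=20):
    """Minimize loss_fn over the ball {w : ||w - w0|| <= radius} with Frank-Wolfe."""
    w = w0.clone()
    for k in range(n_iters):
        w_var = w.detach().requires_grad_(True)
        grad, = torch.autograd.grad(loss_fn(w_var), w_var)
        # Linear minimization oracle on the ball: go as far as allowed against the gradient.
        s = w0 - radius * grad / (grad.norm() + 1e-12)
        gamma = 2.0 / (k + 2)                       # classic Frank-Wolfe step size
        w = (1 - gamma) * w.detach() + gamma * s    # convex combination stays inside the ball
    return w

# Toy usage: one "small region" solve for a quadratic loss centred away from w0.
target = torch.ones(5)
loss_fn = lambda w: ((w - target) ** 2).sum()
w0 = torch.zeros(5)
print(frank_wolfe_ball(loss_fn, w0))   # moves toward `target` but stays within `radius` of w0
```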