Efficient and Flexible Neural Network Training through Layer-wise Feedback Propagation
- URL: http://arxiv.org/abs/2308.12053v2
- Date: Wed, 05 Feb 2025 20:37:52 GMT
- Title: Efficient and Flexible Neural Network Training through Layer-wise Feedback Propagation
- Authors: Leander Weber, Jim Berend, Moritz Weckbecker, Alexander Binder, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin
- Abstract summary: We present Layer-wise Feedback Propagation (LFP), a novel training principle for neural network-like predictors.
LFP decomposes a reward to individual neurons based on their respective contributions to solving a given task.
Our method then implements a greedy approach reinforcing helpful parts of the network and weakening harmful ones.
- Score: 49.44309457870649
- License:
- Abstract: Gradient-based optimization has been a cornerstone of machine learning, enabling the vast advances of AI development over the past decades. However, since this type of optimization requires differentiation, it reduces flexibility in the choice of model and objective. With recent evidence of the benefits of non-differentiable (e.g., neuromorphic) architectures over classical models, such constraints can become limiting in the future. We present Layer-wise Feedback Propagation (LFP), a novel training principle for neural network-like predictors that utilizes methods from the domain of explainability to decompose a reward to individual neurons based on their respective contributions to solving a given task, without imposing any differentiability requirements. Leveraging these neuron-wise rewards, our method then implements a greedy approach, reinforcing helpful parts of the network and weakening harmful ones. While having comparable computational complexity to gradient descent, LFP offers the advantage that it obtains sparse models due to an implicit weight scaling. We establish the convergence of LFP theoretically and empirically, demonstrating its effectiveness on various models and datasets. We further investigate two applications of LFP: firstly, neural network pruning, and secondly, the optimization of neuromorphic architectures such as Heaviside-step-function-activated Spiking Neural Networks (SNNs). In the first setting, LFP naturally generates sparse models that are easily prunable and thus encode and compute information efficiently. In the second setting, LFP achieves performance comparable to surrogate gradient descent but provides approximation-free training, which eases implementation on neuromorphic hardware. Consequently, LFP combines efficiency in terms of computation and representation with flexibility w.r.t. model architecture and objective function. Our code is available.
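To make the principle concrete, below is a minimal NumPy sketch of an LFP-style update on a tiny two-layer network: a scalar reward is decomposed onto neurons through an LRP-like contribution pass, and each connection is then reinforced or weakened according to the reward it carries. The reward definition, the normalization, and the update rule here are illustrative assumptions for this sketch, not the paper's exact formulation; note that no derivatives of the activation are used anywhere.

```python
# Illustrative LFP-style sketch (NumPy); NOT the authors' exact update rule.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.5, (4, 8))   # input -> hidden weights
W2 = rng.normal(0.0, 0.5, (8, 3))   # hidden -> output weights
lr, eps = 0.1, 1e-9

def forward(x):
    h = np.maximum(0.0, x @ W1)      # ReLU hidden activations
    return h, h @ W2                 # hidden activations, output scores

def lfp_step(x, target):
    """One reward-decomposition update (illustrative choices throughout)."""
    global W1, W2
    h, y = forward(x)
    p = np.exp(y - y.max()); p /= p.sum()
    # Reward at the output: positive for the target class, negative elsewhere.
    R_out = np.where(np.arange(y.size) == target, 1.0 - p, -p)
    # LRP-like decomposition: each connection receives a share of the
    # downstream reward proportional to its contribution a_j * w_jk.
    contrib2 = h[:, None] * W2
    share2 = contrib2 / (np.abs(contrib2).sum(axis=0) + eps)
    R_hid = share2 @ R_out                       # reward per hidden neuron
    contrib1 = x[:, None] * W1
    share1 = contrib1 / (np.abs(contrib1).sum(axis=0) + eps)
    # Greedy update: reinforce connections carrying positive reward, weaken the rest.
    W2 += lr * share2 * R_out[None, :]
    W1 += lr * share1 * R_hid[None, :]
    return p[target]

x = rng.normal(size=4)
for _ in range(50):
    conf = lfp_step(x, target=1)
print("confidence in target class:", round(float(conf), 3))
```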
Related papers
- Finite Element Neural Network Interpolation. Part I: Interpretable and Adaptive Discretization for Solving PDEs [44.99833362998488]
We present a sparse neural network architecture extending previous work on Embedded Finite Element Neural Networks (EFENN).
Due to their mesh-based structure, EFENNs require significantly fewer trainable parameters than fully connected neural networks.
Our FENNI framework, built within EFENN, brings improvements to the HiDeNN approach.
arXiv Detail & Related papers (2024-12-07T18:31:17Z) - Advancing Neural Network Performance through Emergence-Promoting Initialization Scheme [0.0]
Emergence in machine learning refers to the spontaneous appearance of capabilities that arise from the scale and structure of training data.
We introduce a novel yet straightforward neural network initialization scheme that aims at achieving greater potential for emergence.
We demonstrate substantial improvements in both model accuracy and training speed, with and without batch normalization.
arXiv Detail & Related papers (2024-07-26T18:56:47Z) - Explicit Foundation Model Optimization with Self-Attentive Feed-Forward Neural Units [4.807347156077897]
Iterative approximation methods using backpropagation enable the optimization of neural networks, but they remain computationally expensive when used at scale.
This paper presents an efficient alternative for optimizing neural networks that reduces the costs of scaling neural networks and provides high-efficiency optimizations for low-resource applications.
arXiv Detail & Related papers (2023-11-13T17:55:07Z) - Stochastic Unrolled Federated Learning [85.6993263983062]
We introduce Stochastic UnRolled Federated learning (SURF), a method that expands algorithm unrolling to federated learning.
Our proposed method tackles two challenges of this expansion, namely the need to feed whole datasets to the unrolled optimizers and the decentralized nature of federated learning.
arXiv Detail & Related papers (2023-05-24T17:26:22Z) - NAR-Former: Neural Architecture Representation Learning towards Holistic Attributes Prediction [37.357949900603295]
We propose a neural architecture representation model that can be used to estimate attributes holistically.
Experiment results show that our proposed framework can be used to predict the latency and accuracy attributes of both cell architectures and whole deep neural networks.
arXiv Detail & Related papers (2022-11-15T10:15:21Z) - An Adaptive and Stability-Promoting Layerwise Training Approach for Sparse Deep Neural Network Architecture [0.0]
This work presents a two-stage adaptive framework for developing deep neural network (DNN) architectures that generalize well for a given training data set.
In the first stage, a layerwise training approach is adopted where a new layer is added each time and trained independently by freezing parameters in the previous layers.
We introduce an epsilon-delta stability-promoting concept as a desirable property for a learning algorithm and show that employing manifold regularization yields an epsilon-delta stability-promoting algorithm.
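A minimal PyTorch sketch of the add-and-freeze loop described above (illustrative only; it omits the paper's adaptivity criterion and the manifold-regularization / stability machinery, and the layer widths, optimizer, and loss are arbitrary choices):

```python
import torch
import torch.nn as nn

def train_current(model, data, epochs=20, lr=1e-2):
    # Optimize only the parameters that are still trainable (newest block + head).
    params = [p for p in model.parameters() if p.requires_grad]
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x, y in data:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

def grow_network(data, in_dim, hidden, out_dim, num_layers=3):
    blocks = nn.ModuleList()
    prev_dim = in_dim
    model = None
    for _ in range(num_layers):
        # Freeze everything trained so far.
        for p in blocks.parameters():
            p.requires_grad_(False)
        # Append a fresh hidden block and a new output head; only these are trained.
        blocks.append(nn.Sequential(nn.Linear(prev_dim, hidden), nn.ReLU()))
        prev_dim = hidden
        model = nn.Sequential(*blocks, nn.Linear(prev_dim, out_dim))
        train_current(model, data)
    return model

# Toy usage: regress y = sum(x) on random data.
xs = torch.randn(256, 10)
ys = xs.sum(dim=1, keepdim=True)
net = grow_network([(xs, ys)], in_dim=10, hidden=32, out_dim=1, num_layers=3)
```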
arXiv Detail & Related papers (2022-11-13T09:51:16Z) - Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z) - Non-Gradient Manifold Neural Network [79.44066256794187]
Deep neural networks (DNNs) generally take thousands of iterations to optimize via gradient descent.
We propose a novel manifold neural network based on non-gradient optimization.
arXiv Detail & Related papers (2021-06-15T06:39:13Z) - Dynamic Hierarchical Mimicking Towards Consistent Optimization Objectives [73.15276998621582]
We propose a generic feature learning mechanism to advance CNN training with enhanced generalization ability.
Partially inspired by DSN, we fork delicately designed side branches from the intermediate layers of a given neural network.
Experiments on both category and instance recognition tasks demonstrate the substantial improvements of our proposed method.
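For intuition, a minimal PyTorch sketch of the side-branch idea in the spirit of deep supervision follows; the branch architecture, loss weighting, and mimicking objective used in the paper are more elaborate than this illustrative auxiliary-classifier setup:

```python
import torch
import torch.nn as nn

class BranchedCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                    nn.MaxPool2d(2))
        self.stage2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                                    nn.MaxPool2d(2))
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(32, num_classes))
        # Auxiliary side branch forked from the intermediate stage.
        self.aux = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(16, num_classes))

    def forward(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        return self.head(f2), self.aux(f1)   # main and side-branch logits

model = BranchedCNN()
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
main_logits, aux_logits = model(x)
# Combine the main loss with a down-weighted side-branch loss (weight is arbitrary here).
loss = nn.functional.cross_entropy(main_logits, y) \
       + 0.3 * nn.functional.cross_entropy(aux_logits, y)
loss.backward()
```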
arXiv Detail & Related papers (2020-03-24T09:56:13Z) - Belief Propagation Reloaded: Learning BP-Layers for Labeling Problems [83.98774574197613]
We take one of the simplest inference methods, a truncated max-product Belief Propagation, and add what is necessary to make it a proper component of a deep learning model.
This BP-Layer can be used as the final or an intermediate block in convolutional neural networks (CNNs).
The model is applicable to a range of dense prediction problems, is well-trainable and provides parameter-efficient and robust solutions in stereo, optical flow and semantic segmentation.
arXiv Detail & Related papers (2020-03-13T13:11:35Z)
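For intuition about the inference such a BP-Layer wraps, here is a minimal NumPy sketch of a single left-to-right truncated min-sum (log-domain max-product) sweep along a 1D chain with a truncated linear pairwise cost; the actual BP-Layer is differentiable, learned end-to-end, and combines sweeps over several directions:

```python
import numpy as np

def minsum_sweep(unary, lam=1.0, tau=4.0):
    """unary: (N, L) costs for N chain nodes and L labels (toy example)."""
    N, L = unary.shape
    labels = np.arange(L)
    # Truncated linear pairwise cost between neighboring labels.
    pairwise = np.minimum(lam * np.abs(labels[:, None] - labels[None, :]), tau)
    msg = np.zeros(L)                     # message into the first node
    beliefs = np.empty_like(unary)
    for i in range(N):
        beliefs[i] = unary[i] + msg
        # Message to the next node: minimize over this node's labels.
        msg = (beliefs[i][:, None] + pairwise).min(axis=0)
        msg -= msg.min()                  # normalize for numerical stability
    return beliefs.argmin(axis=1)         # per-node labels after one sweep

costs = np.random.default_rng(0).random((16, 8))   # toy unary costs
print(minsum_sweep(costs))
```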