Benchmarking the Accuracy and Robustness of Feedback Alignment
Algorithms
- URL: http://arxiv.org/abs/2108.13446v1
- Date: Mon, 30 Aug 2021 18:02:55 GMT
- Title: Benchmarking the Accuracy and Robustness of Feedback Alignment
Algorithms
- Authors: Albert Jiménez Sanfiz, Mohamed Akrout
- Abstract summary: Backpropagation is the default algorithm for training deep neural networks due to its simplicity, efficiency and high convergence rate.
In recent years, more biologically plausible learning methods have been proposed.
BioTorch is a software framework to create, train, and benchmark biologically motivated neural networks.
- Score: 1.2183405753834562
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Backpropagation is the default algorithm for training deep neural networks
due to its simplicity, efficiency and high convergence rate. However, its
requirements make it impossible to implement in the human brain. In recent
years, more biologically plausible learning methods have been proposed. Some of
these methods can match backpropagation accuracy, and simultaneously provide
other extra benefits such as faster training on specialized hardware (e.g.,
ASICs) or higher robustness against adversarial attacks. While the interest in
the field is growing, there is a necessity for open-source libraries and
toolkits to foster research and benchmark algorithms. In this paper, we present
BioTorch, a software framework to create, train, and benchmark biologically
motivated neural networks. In addition, we investigate the performance of
several feedback alignment methods proposed in the literature, thereby
unveiling the importance of the forward and backward weight initialization and
optimizer choice. Finally, we provide a novel robustness study of these methods
against state-of-the-art white-box and black-box adversarial attacks.
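As a companion to the abstract, the sketch below illustrates the core feedback alignment idea it refers to: a linear layer whose backward pass routes the error through a fixed random matrix B instead of the transpose of the forward weights. This is a minimal PyTorch-style illustration only; the class names (FeedbackAlignmentFn, FALinear) and the initialization choices are assumptions for the example, not BioTorch's actual API.

```python
# Minimal sketch of a feedback-alignment linear layer (illustrative, not BioTorch's API).
import torch
import torch.nn as nn


class FeedbackAlignmentFn(torch.autograd.Function):
    """Linear transform whose backward pass uses a fixed random matrix B
    instead of the transpose of the forward weights W."""

    @staticmethod
    def forward(ctx, x, weight, bias, backward_weight):
        ctx.save_for_backward(x, weight, backward_weight)
        return x @ weight.t() + bias

    @staticmethod
    def backward(ctx, grad_output):
        x, weight, backward_weight = ctx.saved_tensors
        # Error is propagated through the fixed matrix B, not weight.t()
        # as in backpropagation.
        grad_input = grad_output @ backward_weight
        # Forward weights are still updated with the usual outer product.
        grad_weight = grad_output.t() @ x
        grad_bias = grad_output.sum(dim=0)
        return grad_input, grad_weight, grad_bias, None


class FALinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        # Fixed backward matrix B; its initialization scale is one of the
        # knobs the abstract highlights as important.
        self.register_buffer("backward_weight",
                             torch.empty(out_features, in_features))
        nn.init.kaiming_uniform_(self.weight)
        nn.init.kaiming_uniform_(self.backward_weight)

    def forward(self, x):
        return FeedbackAlignmentFn.apply(x, self.weight, self.bias,
                                         self.backward_weight)
```

For instance, swapping nn.Linear for FALinear in a small multilayer perceptron and training it with plain SGD yields a basic feedback alignment setup, which echoes the abstract's point that the forward and backward weight initialization and the optimizer choice matter.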
Related papers
- Center-Sensitive Kernel Optimization for Efficient On-Device Incremental Learning [88.78080749909665]
Current on-device training methods focus only on efficient training without considering catastrophic forgetting.
This paper proposes a simple but effective edge-friendly incremental learning framework.
Our method achieves an average accuracy boost of 38.08% with even less memory and approximate computation.
arXiv Detail & Related papers (2024-06-13T05:49:29Z) - Using Machine Learning To Identify Software Weaknesses From Software
Requirement Specifications [49.1574468325115]
This research focuses on finding an efficient machine learning algorithm to identify software weaknesses from requirement specifications.
Keywords extracted using latent semantic analysis help map the CWE categories to PROMISE_exp. Naive Bayes, support vector machine (SVM), decision trees, neural network, and convolutional neural network (CNN) algorithms were tested.
arXiv Detail & Related papers (2023-08-10T13:19:10Z) - A Novel Method for improving accuracy in neural network by reinstating
traditional back propagation technique [0.0]
We propose a novel instant parameter update methodology that eliminates the need for computing gradients at each layer.
Our approach accelerates learning, avoids the vanishing gradient problem, and outperforms state-of-the-art methods on benchmark data sets.
arXiv Detail & Related papers (2023-08-09T16:41:00Z) - Forward-Forward Algorithm for Hyperspectral Image Classification: A
Preliminary Study [0.0]
The forward-forward algorithm (FFA) computes local, layer-wise goodness functions to optimize network parameters (a minimal sketch of such a goodness function follows this list).
This study investigates the application of FFA for hyperspectral image classification.
arXiv Detail & Related papers (2023-07-01T05:39:28Z) - The Cascaded Forward Algorithm for Neural Network Training [61.06444586991505]
We propose a new learning framework for neural networks, the Cascaded Forward (CaFo) algorithm, which, like FF, does not rely on BP optimization.
Unlike FF, our framework directly outputs label distributions at each cascaded block and does not require the generation of additional negative samples.
In our framework, each block can be trained independently, so it can be easily deployed on parallel acceleration systems.
arXiv Detail & Related papers (2023-03-17T02:01:11Z) - Neural Network Adversarial Attack Method Based on Improved Genetic
Algorithm [0.0]
We propose a neural network adversarial attack method based on an improved genetic algorithm.
The method does not require knowledge of the neural network model's internal structure or parameters.
arXiv Detail & Related papers (2021-10-05T04:46:16Z) - Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to harden the model against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z) - Bayesian Optimization with Machine Learning Algorithms Towards Anomaly
Detection [66.05992706105224]
In this paper, an effective anomaly detection framework is proposed that utilizes Bayesian optimization.
The performance of the considered algorithms is evaluated using the ISCX 2012 dataset.
Experimental results show the effectiveness of the proposed framework in terms of accuracy, precision, and recall, with a low false alarm rate.
arXiv Detail & Related papers (2020-08-05T19:29:35Z) - Spiking Neural Networks Hardware Implementations and Challenges: a
Survey [53.429871539789445]
Spiking Neural Networks are cognitive algorithms mimicking neuron and synapse operational principles.
We present the state of the art of hardware implementations of spiking neural networks.
We discuss the strategies employed to leverage the characteristics of these event-driven algorithms at the hardware level.
arXiv Detail & Related papers (2020-05-04T13:24:00Z) - Robust Deep Learning as Optimal Control: Insights and Convergence
Guarantees [19.28405674700399]
Training with adversarial examples is a popular defense mechanism against adversarial attacks.
By interpreting the min-max problem as an optimal control problem, it has been shown that one can exploit the compositional structure of neural networks.
We provide the first convergence analysis of this adversarial training algorithm by combining techniques from robust optimal control and inexact methods in optimization.
arXiv Detail & Related papers (2020-05-01T21:26:38Z)
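As flagged in the forward-forward entry above, here is a minimal sketch of a layer-local goodness function and the corresponding per-layer loss, assuming the common formulation in which positive (real) samples are pushed above a goodness threshold and negative samples below it. The function names and the threshold value are illustrative assumptions, not taken from the cited paper.

```python
# Illustrative forward-forward goodness and per-layer loss (assumed formulation).
import torch


def goodness(activations: torch.Tensor) -> torch.Tensor:
    # Layer-local goodness: sum of squared activities per sample.
    return activations.pow(2).sum(dim=1)


def ff_layer_loss(pos_act: torch.Tensor,
                  neg_act: torch.Tensor,
                  threshold: float = 2.0) -> torch.Tensor:
    # Push goodness above `threshold` for positive samples and below it
    # for negative samples, using a logistic loss on each margin.
    logits = torch.cat([goodness(pos_act) - threshold,
                        threshold - goodness(neg_act)])
    return torch.nn.functional.softplus(-logits).mean()
```

Because the loss depends only on a single layer's activations, each layer can be optimized without propagating gradients backward through the network, which is the contrast the CaFo entry above draws with standard BP training.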
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.