Comparison of Update and Genetic Training Algorithms in a Memristor
Crossbar Perceptron
- URL: http://arxiv.org/abs/2012.06027v2
- Date: Fri, 18 Feb 2022 17:06:37 GMT
- Title: Comparison of Update and Genetic Training Algorithms in a Memristor
Crossbar Perceptron
- Authors: Kyle N. Edwards and Xiao Shen
- Abstract summary: We investigate whether certain training algorithms may be more resilient to particular hardware failure modes.
We implement two training algorithms -- a local update scheme and a genetic algorithm -- in a simulated memristor crossbar.
We demonstrate that there is a clear distinction between the two algorithms in several measures of the rate of failure to train.
- Score: 4.65
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Memristor-based computer architectures are becoming more attractive as a
possible choice of hardware for the implementation of neural networks. However,
at present, memristor technologies are susceptible to a variety of failure
modes, a serious concern in any application where regular access to the
hardware may not be expected or even possible. In this study, we investigate
whether certain training algorithms may be more resilient to particular
hardware failure modes, and therefore more suitable for use in those
applications. We implement two training algorithms -- a local update scheme and
a genetic algorithm -- in a simulated memristor crossbar, and compare their
ability to train for a simple image classification task as an increasing number
of memristors fail to adjust their conductance. We demonstrate that there is a
clear distinction between the two algorithms in several measures of the rate of
failure to train.
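The failure mode studied here, memristors that stop adjusting their conductance, can be illustrated with a toy simulation: a single-layer perceptron whose weights live on simulated devices, a fraction of which are "stuck" and ignore all updates, trained with a local perceptron rule. This is a minimal sketch under assumed settings (task, learning rate, stuck fraction are illustrative), not the authors' actual experimental setup.

```python
import random

random.seed(0)

# Toy stand-in for a memristor crossbar: a single-layer perceptron whose
# weights live on simulated devices. A "stuck" device ignores every update,
# modeling a memristor that fails to adjust its conductance.
class CrossbarPerceptron:
    def __init__(self, n_inputs, stuck_fraction):
        self.w = [random.uniform(-0.5, 0.5) for _ in range(n_inputs)]
        self.stuck = [random.random() < stuck_fraction for _ in range(n_inputs)]

    def predict(self, x):
        return 1 if sum(wi * xi for wi, xi in zip(self.w, x)) >= 0 else 0

    def local_update(self, x, target, lr=0.1):
        # Classic perceptron rule applied device-by-device; stuck devices skip it.
        err = target - self.predict(x)
        for i, xi in enumerate(x):
            if not self.stuck[i]:
                self.w[i] += lr * err * xi

def accuracy(model, data):
    return sum(model.predict(x) == y for x, y in data) / len(data)

# Linearly separable toy task: label 1 iff the first feature exceeds the second.
inputs = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(200)]
data = [(x, 1 if x[0] > x[1] else 0) for x in inputs]

results = {}
for stuck_fraction in (0.0, 0.5):
    model = CrossbarPerceptron(4, stuck_fraction)
    for _ in range(20):
        for x, y in data:
            model.local_update(x, y)
    results[stuck_fraction] = accuracy(model, data)
    print(f"stuck fraction {stuck_fraction:.1f}: accuracy {results[stuck_fraction]:.2f}")
```

Sweeping the stuck fraction and recording how often training fails to reach a target accuracy would give a crude analogue of the paper's failure-rate measures; a genetic algorithm would instead mutate whole weight vectors and select by fitness, sidestepping per-device updates entirely.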
Related papers
- Newton Losses: Using Curvature Information for Learning with Differentiable Algorithms [80.37846867546517]
We show how to train eight different neural networks with custom objectives.
We exploit their second-order information via their empirical Fisher and Hessian matrices.
We apply Newton Losses to achieve significant improvements for less well-optimized differentiable algorithms.
arXiv Detail & Related papers (2024-10-24T18:02:11Z)
- Training Neural Networks with Internal State, Unconstrained Connectivity, and Discrete Activations [66.53734987585244]
True intelligence may require the ability of a machine learning model to manage internal state.
We show that we have not yet discovered the most effective algorithms for training such models.
We present one attempt to design such a training algorithm, applied to an architecture with binary activations and only a single matrix of weights.
arXiv Detail & Related papers (2023-12-22T01:19:08Z)
- Benchmarking Neural Network Training Algorithms [46.39165332979669]
Training algorithms are an essential part of every deep learning pipeline.
As a community, we are unable to reliably identify training algorithm improvements.
We introduce a new, competitive, time-to-result benchmark using multiple workloads running on fixed hardware.
arXiv Detail & Related papers (2023-06-12T15:21:02Z)
- Transformers as Statisticians: Provable In-Context Learning with In-Context Algorithm Selection [88.23337313766353]
This work first provides a comprehensive statistical theory for transformers to perform ICL.
We show that transformers can implement a broad class of standard machine learning algorithms in context.
A single transformer can adaptively select different base ICL algorithms.
arXiv Detail & Related papers (2023-06-07T17:59:31Z)
- Binary stochasticity enabled highly efficient neuromorphic deep learning achieves better-than-software accuracy [17.11946381948498]
Deep learning requires high-precision handling of forward signals, backpropagated errors, and weight updates.
It is challenging to implement deep learning in hardware systems that use noisy analog memristors as artificial synapses.
We propose a binary learning algorithm that modifies all elementary neural network operations.
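A core ingredient of such binary schemes is stochastic rounding, which turns an analog value into a binary signal that is unbiased in expectation. The sketch below illustrates only that building block under assumed names; it is not the paper's algorithm or API.

```python
import random

random.seed(0)

# Stochastic binarization: a value p in [0, 1] is rounded to 1 with
# probability p and to 0 otherwise, so the binary signal equals p in
# expectation. This lets noisy, low-precision hardware carry information
# that is still correct on average. (Illustrative sketch, not the paper's code.)
def stochastic_binarize(p):
    return 1 if random.random() < p else 0

p = 0.3
samples = [stochastic_binarize(p) for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(f"target={p}, empirical mean={mean:.3f}")
```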
arXiv Detail & Related papers (2023-04-25T14:38:36Z)
- Low-rank lottery tickets: finding efficient low-rank neural networks via matrix differential equations [2.3488056916440856]
We propose a novel algorithm to find efficient low-rank networks.
These networks are determined and adapted already during the training phase.
Our method automatically and dynamically adapts the ranks during training to achieve a desired approximation accuracy.
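The rank-adaptation idea can be sketched with a singular-value truncation criterion: keep the smallest rank whose discarded singular values fall below a relative tolerance. This is a simplified stand-in for the paper's dynamical low-rank training (the tolerance and function names are assumptions), not its actual integrator.

```python
import numpy as np

# Pick the smallest rank r such that the discarded singular values
# contribute at most tau (relative, in Frobenius norm) to the matrix.
def adapt_rank(W, tau=1e-2):
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    total = np.linalg.norm(s)
    r = len(s)
    for k in range(len(s) + 1):
        if np.linalg.norm(s[k:]) <= tau * total:  # tail small enough?
            r = k
            break
    return U[:, :r], s[:r], Vt[:r, :]

rng = np.random.default_rng(0)
# A matrix with true rank 3 plus small noise: the adapted rank should be low.
A = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))
A += 1e-4 * rng.standard_normal((50, 40))
U, s, Vt = adapt_rank(A, tau=1e-2)
print("adapted rank:", len(s))
```

By construction, the truncated factorization satisfies the requested relative accuracy, since the Frobenius error of an SVD truncation equals the norm of the discarded singular values.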
arXiv Detail & Related papers (2022-05-26T18:18:12Z)
- Real-Time GPU-Accelerated Machine Learning Based Multiuser Detection for 5G and Beyond [70.81551587109833]
Nonlinear beamforming filters can significantly outperform linear approaches in stationary scenarios with massive connectivity.
One of the main challenges comes from the real-time implementation of these algorithms.
This paper explores the acceleration of APSM-based algorithms through massive parallelization.
arXiv Detail & Related papers (2022-01-13T15:20:45Z)
- Gradients are Not All You Need [28.29420710601308]
We discuss a common chaos based failure mode which appears in a variety of differentiable circumstances.
We trace this failure to the spectrum of the Jacobian of the system under study, and provide criteria for when a practitioner might expect this failure to spoil their differentiation based optimization algorithms.
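The chaos-based failure mode can be seen in a one-parameter iterated map: the gradient of the final state with respect to a parameter is a product of per-step Jacobians, and when the map is chaotic that product grows exponentially. The logistic map below is a standard illustrative example, not the paper's experiment.

```python
# Logistic map x_{t+1} = r * x_t * (1 - x_t). By the chain rule,
#   d x_{t+1}/dr = x_t (1 - x_t) + r (1 - 2 x_t) * d x_t/dr,
# so the gradient accumulates a product of per-step Jacobians r(1 - 2x_t).
# In the chaotic regime these factors exceed 1 on average and the gradient
# explodes; in the fixed-point regime it stays bounded.
def final_state_gradient(r, x0=0.5, steps=50):
    x, dx_dr = x0, 0.0
    for _ in range(steps):
        dx_dr = x * (1 - x) + r * (1 - 2 * x) * dx_dr
        x = r * x * (1 - x)
    return x, dx_dr

_, g_stable = final_state_gradient(2.5)   # converges to a fixed point
_, g_chaotic = final_state_gradient(3.9)  # chaotic regime
print(f"|grad| stable:  {abs(g_stable):.3e}")
print(f"|grad| chaotic: {abs(g_chaotic):.3e}")
```

The contrast between the two magnitudes is exactly the Jacobian-spectrum criterion the paper describes: when per-step Jacobians have spectral radius above 1 on average, differentiation-based optimization through the rollout becomes unreliable.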
arXiv Detail & Related papers (2021-11-10T16:51:04Z)
- Training Generative Adversarial Networks in One Stage [58.983325666852856]
We introduce a general training scheme that enables training GANs efficiently in only one stage.
We show that the proposed method is readily applicable to other adversarial-training scenarios, such as data-free knowledge distillation.
arXiv Detail & Related papers (2021-02-28T09:03:39Z)
- Evolving Reinforcement Learning Algorithms [186.62294652057062]
We propose a method for meta-learning reinforcement learning algorithms.
The learned algorithms are domain-agnostic and can generalize to new environments not seen during training.
We highlight two learned algorithms which obtain good generalization performance over other classical control tasks, gridworld type tasks, and Atari games.
arXiv Detail & Related papers (2021-01-08T18:55:07Z)
- Memristor Hardware-Friendly Reinforcement Learning [14.853739554366351]
We propose a memristive neuromorphic hardware implementation for the actor-critic algorithm in reinforcement learning.
We consider the task of balancing an inverted pendulum, a classical problem in both RL and control theory.
We believe that this study shows the promise of using memristor-based hardware neural networks for handling complex tasks through in-situ reinforcement learning.
arXiv Detail & Related papers (2020-01-20T01:08:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.