Neko: a Library for Exploring Neuromorphic Learning Rules
- URL: http://arxiv.org/abs/2105.00324v1
- Date: Sat, 1 May 2021 18:50:32 GMT
- Title: Neko: a Library for Exploring Neuromorphic Learning Rules
- Authors: Zixuan Zhao, Nathan Wycoff, Neil Getty, Rick Stevens, Fangfang Xia
- Abstract summary: Neko is a modular library for neuromorphic learning algorithms.
It can replicate state-of-the-art algorithms and, in one case, significantly outperform them in accuracy and speed.
Neko is an open source Python library that supports PyTorch and TensorFlow backends.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The field of neuromorphic computing is in a period of active exploration.
While many tools have been developed to simulate neuronal dynamics or convert
deep networks to spiking models, general software libraries for learning rules
remain underexplored. This is partly due to the diverse, challenging nature of
efforts to design new learning rules, which range from encoding methods to
gradient approximations, from population approaches that mimic the Bayesian
brain to constrained learning algorithms deployed on memristor crossbars. To
address this gap, we present Neko, a modular, extensible library with a focus
on aiding the design of new learning algorithms. We demonstrate the utility of
Neko in three exemplar cases: online local learning, probabilistic learning,
and analog on-device learning. Our results show that Neko can replicate the
state-of-the-art algorithms and, in one case, lead to significant
outperformance in accuracy and speed. Further, it offers tools including
gradient comparison that can help develop new algorithmic variants. Neko is an
open source Python library that supports PyTorch and TensorFlow backends.
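To make the gradient comparison workflow concrete, here is a minimal PyTorch sketch in the spirit of that tool. The helper and setup below are hypothetical illustrations, not Neko's actual API:

```python
import torch

# Hypothetical sketch of a gradient comparison workflow in the spirit of
# Neko's tooling. The helper below is illustrative; it is NOT Neko's API.

def gradient_cosine(g_approx: torch.Tensor, g_exact: torch.Tensor) -> float:
    """Cosine similarity between an approximate and an exact gradient."""
    a, b = g_approx.flatten(), g_exact.flatten()
    return float(torch.dot(a, b) / (a.norm() * b.norm() + 1e-12))

# Toy model and batch
model = torch.nn.Linear(10, 2)
x = torch.randn(32, 10)
y = torch.randint(0, 2, (32,))

# Exact gradient from backpropagation
loss = torch.nn.functional.cross_entropy(model(x), y)
g_exact = torch.autograd.grad(loss, model.weight)[0]

# Stand-in for the gradient estimate of a candidate learning rule
# (e.g., an online local rule); here a perturbed copy, for illustration.
g_approx = g_exact + 0.1 * torch.randn_like(g_exact)

print(f"cosine(approx, exact) = {gradient_cosine(g_approx, g_exact):.3f}")
```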
Related papers
- RelChaNet: Neural Network Feature Selection using Relative Change Scores
We introduce RelChaNet, a novel and lightweight feature selection algorithm that uses neuron pruning and regrowth in the input layer of a dense neural network.
Our approach generally outperforms current state-of-the-art methods; in particular, it improves average accuracy on the MNIST dataset by 2%. A generic sketch of the prune-and-regrow loop follows below.
arXiv Detail & Related papers (2024-10-03T09:56:39Z)
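The prune-and-regrow pattern RelChaNet applies to the input layer can be sketched as follows. The scoring here is a generic relative-change heuristic and may differ from the paper's exact score:

```python
import torch

# Generic sketch of input-feature pruning and regrowth driven by how much
# each feature's weights changed; RelChaNet's exact score may differ.

def prune_and_regrow(w_before, w_after, mask, k):
    """Deactivate the k active features whose weights changed least
    (relative to their magnitude) and reactivate k previously inactive ones."""
    inactive_before = (~mask).nonzero().flatten()
    rel_change = (w_after - w_before).abs().sum(0) / (w_before.abs().sum(0) + 1e-12)
    rel_change[~mask] = float("inf")              # score active features only
    prune = rel_change.topk(k, largest=False).indices
    mask[prune] = False                           # prune lowest-scoring features
    regrow = inactive_before[torch.randperm(len(inactive_before))[:k]]
    mask[regrow] = True                           # regrow from the inactive pool
    return mask

mask = torch.ones(20, dtype=torch.bool)           # 20 input features, all active
w0, w1 = torch.randn(5, 20), torch.randn(5, 20)   # weights before/after an interval
mask = prune_and_regrow(w0, w1, mask, k=3)
print(f"{int(mask.sum())} features remain active")
```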
- A Unified Framework for Neural Computation and Learning Over Time
Hamiltonian Learning is a novel unified framework for learning with neural networks "over time".
It is based on differential equations that: (i) can be integrated without the need for external software solvers; (ii) generalize the well-established notion of gradient-based learning in feed-forward and recurrent networks; (iii) open up novel perspectives. A toy integration sketch follows below.
arXiv Detail & Related papers (2024-09-18T14:57:13Z)
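Claim (ii), that gradient-based learning falls out of integrating a differential equation, can be illustrated with explicit Euler, which needs no external solver. This toy sketch is ours, not the paper's formulation:

```python
import torch

# Explicit-Euler integration of the gradient flow d(theta)/dt = -grad L(theta).
# One Euler step with step size dt is exactly one gradient-descent update,
# so no external ODE solver is required. (Toy illustration only, not the
# paper's actual Hamiltonian formulation.)

theta = torch.tensor([3.0, -2.0], requires_grad=True)
loss_fn = lambda t: (t ** 2).sum()    # toy quadratic loss, minimum at 0

dt = 0.1                              # integration step = learning rate
for _ in range(50):
    (grad,) = torch.autograd.grad(loss_fn(theta), theta)
    with torch.no_grad():
        theta -= dt * grad            # one Euler step of the flow
print(theta)                          # close to the minimizer [0., 0.]
```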
- The Clock and the Pizza: Two Stories in Mechanistic Explanation of Neural Networks
We show that algorithm discovery in neural networks is sometimes more complex than expected.
We show that even simple learning problems can admit a surprising diversity of solutions.
arXiv Detail & Related papers (2023-06-30T17:59:13Z)
- Proximal Mean Field Learning in Shallow Neural Networks
We propose a custom learning algorithm for shallow neural networks with a single, infinitely wide hidden layer.
We realize mean field learning as a computational algorithm, rather than as an analytical tool.
Our algorithm performs gradient descent on the free energy associated with the risk functional; a generic particle sketch follows below.
arXiv Detail & Related papers (2022-10-25T10:06:26Z)
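Free-energy descent over a weight distribution is often realized with interacting particles. The noisy-gradient sketch below (mean-field Langevin style) is a generic stand-in for that idea, not the paper's proximal recursion:

```python
import torch

# Generic particle sketch of descending a free energy of the form
#   F(mu) = risk(mu) + beta^{-1} * (entropy term),
# where each particle plays the role of one hidden neuron of an idealized
# infinitely wide shallow network. Noisy gradient steps approximate this
# descent; the paper itself uses proximal recursions instead.

n_particles, beta, dt = 256, 50.0, 1e-2
w = torch.randn(n_particles, 2, requires_grad=True)  # particles = neuron weights

x = torch.randn(128, 2)                              # toy regression data
y = torch.sin(x[:, 0])

for _ in range(200):
    pred = torch.relu(x @ w.t()).mean(dim=1)         # mean-field network output
    risk = ((pred - y) ** 2).mean()
    (grad,) = torch.autograd.grad(risk, w)
    with torch.no_grad():
        noise = torch.randn_like(w)
        w += -dt * grad + (2 * dt / beta) ** 0.5 * noise  # noisy descent step
print(f"final risk: {risk.item():.4f}")
```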
- Neurocoder: Learning General-Purpose Computation Using Stored Neural Programs
Neurocoder is an entirely new class of general-purpose conditional computational machines.
It "codes" itself in a data-responsive way by composing relevant programs from a set of shareable, modular programs.
We show new capacity to learn modular programs, handle severe pattern shifts and remember old programs as new ones are learnt.
arXiv Detail & Related papers (2020-09-24T01:39:16Z)
- Captum: A unified and generic model interpretability library for PyTorch
We introduce a novel, unified, open-source model interpretability library for PyTorch.
The library contains generic implementations of a number of gradient and perturbation-based attribution algorithms.
It can be used for both classification and non-classification models; a minimal usage example follows below.
arXiv Detail & Related papers (2020-09-16T18:57:57Z)
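A typical Captum call looks like the following. The toy model is ours; the attribution API is Captum's:

```python
import torch
from captum.attr import IntegratedGradients

# Attribute a classifier's prediction to its input features with
# Integrated Gradients, one of the gradient-based algorithms Captum ships.
# The model here is a toy stand-in.

model = torch.nn.Sequential(
    torch.nn.Linear(10, 16),
    torch.nn.ReLU(),
    torch.nn.Linear(16, 3),
)
model.eval()

ig = IntegratedGradients(model)
inputs = torch.randn(4, 10)                    # batch of 4 examples
attributions = ig.attribute(inputs, target=0)  # attribute w.r.t. class 0
print(attributions.shape)                      # torch.Size([4, 10])
```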
- PyTorch Metric Learning
PyTorch Metric Learning is an open source library that aims to remove the barrier of implementing deep metric learning from scratch, for both researchers and practitioners.
The modular and flexible design allows users to easily try out different combinations of algorithms in their existing code.
It also comes with complete train/test workflows, for users who want results fast; a minimal loss example follows below.
arXiv Detail & Related papers (2020-08-20T19:08:56Z)
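Swapping one of the library's losses into an existing training step is the core workflow. A minimal example, assuming the package is installed (the toy embedder is ours):

```python
import torch
from pytorch_metric_learning import losses

# Drop a metric-learning loss into an existing training step: the loss
# consumes embeddings and integer class labels directly.

embedder = torch.nn.Linear(128, 64)            # toy embedding model
loss_func = losses.TripletMarginLoss(margin=0.1)

features = torch.randn(32, 128)
labels = torch.randint(0, 8, (32,))            # 8 classes

embeddings = embedder(features)
loss = loss_func(embeddings, labels)           # triplets formed within the batch
loss.backward()
print(f"triplet loss: {loss.item():.4f}")
```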
- A Fortran-Keras Deep Learning Bridge for Scientific Computing
We introduce a software library, the Fortran-Keras Bridge (FKB).
The paper describes several unique features offered by FKB, such as customizable layers, loss functions, and network ensembles.
The paper concludes with a case study that applies FKB to address open questions about the robustness of an experimental approach to global climate simulation.
arXiv Detail & Related papers (2020-04-14T15:10:09Z)
- AutoML-Zero: Evolving Machine Learning Algorithms From Scratch
We show that it is possible to automatically discover complete machine learning algorithms just using basic mathematical operations as building blocks.
We demonstrate this by introducing a novel framework that significantly reduces human bias through a generic search space.
We believe these preliminary successes in discovering machine learning algorithms from scratch indicate a promising new direction in the field. A toy version of the evolutionary loop follows below.
arXiv Detail & Related papers (2020-03-06T19:00:04Z)
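The search can be caricatured as evolving tiny programs built from basic math ops. This toy hill-climbing loop is our illustration and is far simpler than the paper's framework:

```python
import random

# Toy caricature of evolving a program from basic math operations to fit
# f(x) = 2x + 1. Real AutoML-Zero evolves setup/predict/learn programs
# over a much richer instruction space with regularized evolution.

OPS = [("add", lambda a, b: a + b), ("sub", lambda a, b: a - b),
       ("mul", lambda a, b: a * b)]

def random_program(length=4):
    # Each instruction: (op, src1, src2, dest) over registers r0..r3; r0 = input.
    return [(random.randrange(len(OPS)), random.randrange(4),
             random.randrange(4), random.randrange(1, 4)) for _ in range(length)]

def run(program, x):
    regs = [x, 1.0, 0.0, 0.0]
    for op, s1, s2, d in program:
        regs[d] = OPS[op][1](regs[s1], regs[s2])
    return regs[3]                      # r3 holds the output

def loss(program):
    xs = [0.0, 1.0, 2.0, 3.0]
    return sum((run(program, x) - (2 * x + 1)) ** 2 for x in xs)

def mutate(program):
    child = list(program)
    child[random.randrange(len(child))] = random_program(1)[0]
    return child

best = random_program()
for _ in range(5000):                   # accept mutations that don't hurt
    child = mutate(best)
    if loss(child) <= loss(best):
        best = child
print(f"best loss: {loss(best):.4f}")
```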
- On the distance between two neural networks and the stability of learning
This paper relates parameter distance to gradient breakdown for a broad class of nonlinear compositional functions.
The analysis leads to a new distance function called deep relative trust and a descent lemma for neural networks; a short sketch of the distance follows below.
arXiv Detail & Related papers (2020-02-09T19:18:39Z)
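As we understand it, the deep relative trust distance compounds per-layer relative perturbations multiplicatively across depth. The sketch below reflects our reading of the paper; treat the exact norm choice as an assumption:

```python
import torch

# Sketch of the "deep relative trust" distance as we recall it:
# per-layer relative perturbations compound multiplicatively,
#   d(W, W~) = prod_l (1 + ||W~_l - W_l|| / ||W_l||) - 1.
# The paper's exact norm may differ from the Frobenius norm used here.

def deep_relative_trust(layers_a, layers_b):
    d = 1.0
    for wa, wb in zip(layers_a, layers_b):
        d *= 1.0 + (wb - wa).norm() / wa.norm()
    return d - 1.0

net_a = [torch.randn(64, 64) for _ in range(5)]          # 5 layers of weights
net_b = [w + 0.01 * torch.randn_like(w) for w in net_a]  # small perturbation
print(f"deep relative trust: {deep_relative_trust(net_a, net_b):.4f}")
```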