OscNet: Machine Learning on CMOS Oscillator Networks
- URL: http://arxiv.org/abs/2502.07192v1
- Date: Tue, 11 Feb 2025 02:32:32 GMT
- Title: OscNet: Machine Learning on CMOS Oscillator Networks
- Authors: Wenxiao Cai, Thomas H. Lee
- Abstract summary: We propose a new and energy-efficient machine learning framework implemented on CMOS Oscillator Networks (OscNet).
We model the developmental processes of the prenatal brain's visual system using OscNet, updating weights based on the biologically inspired Hebbian rule.
Experimental results demonstrate that the Hebbian learning pipeline on OscNet achieves performance comparable to, or even surpassing, traditional machine learning algorithms.
- Abstract: Machine learning and AI have achieved remarkable advancements, but at the cost of significant computational resources and energy consumption. This has created an urgent need for a novel, energy-efficient computational fabric to replace the current computing pipeline. Recently, a promising approach has emerged that mimics spiking neurons in the brain and leverages oscillators on CMOS for direct computation. In this context, we propose a new and energy-efficient machine learning framework implemented on CMOS Oscillator Networks (OscNet). We model the developmental processes of the prenatal brain's visual system using OscNet, updating weights based on the biologically inspired Hebbian rule. This same pipeline is then directly applied to standard machine learning tasks. OscNet is specially designed hardware that is inherently energy-efficient. Its reliance on forward propagation alone for training further enhances its energy efficiency while maintaining biological plausibility. Simulation validates our designs of OscNet architectures. Experimental results demonstrate that the Hebbian learning pipeline on OscNet achieves performance comparable to or even surpassing traditional machine learning algorithms, highlighting its potential as an energy-efficient and effective computational paradigm.
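The abstract states only that OscNet trains with forward passes and a Hebbian rule; the exact update is not given there. As a point of reference, here is a minimal NumPy sketch of the classic rate-based Hebbian update (the learning rate, normalization, and layer sizes are illustrative assumptions, not the paper's values):

```python
# Sketch of a forward-only Hebbian weight update; the paper's exact
# OscNet update rule is not specified in the abstract.
import numpy as np

def hebbian_step(W, pre, post, eta=0.01):
    """One Hebbian update: neurons that fire together wire together."""
    W += eta * np.outer(post, pre)  # correlation-driven strengthening
    W /= np.linalg.norm(W, axis=1, keepdims=True) + 1e-12  # keep weights bounded
    return W

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))  # 8 inputs -> 4 output neurons (toy sizes)
x = rng.random(8)            # presynaptic activity
y = W @ x                    # forward pass only; no backpropagation
W = hebbian_step(W, x, y)
```

Because the update depends only on locally available pre- and post-synaptic activity, no backward pass or global error signal is needed, which is what makes such rules attractive for low-power hardware.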
Related papers
- Solving Boltzmann Optimization Problems with Deep Learning
The Ising model shows particular promise as a future framework for highly energy-efficient computation.
Ising systems can operate at energies approaching the thermodynamic limits for the energy consumption of computation.
The challenge in creating Ising-based hardware lies in optimizing useful circuits that produce correct results on fundamentally nondeterministic hardware.
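For orientation, the objective such Ising machines minimize is the standard Ising energy E(s) = -(1/2) Σᵢⱼ Jᵢⱼsᵢsⱼ - Σᵢ hᵢsᵢ. A toy sketch with random couplings (not the paper's circuits):

```python
# Hedged sketch of the standard Ising energy an Ising machine minimizes.
import numpy as np

def ising_energy(s, J, h):
    """Energy of spin configuration s in {-1, +1}^n."""
    return -0.5 * s @ J @ s - h @ s  # 0.5 corrects double-counting of symmetric J

rng = np.random.default_rng(1)
n = 6
J = rng.normal(size=(n, n))
J = (J + J.T) / 2            # symmetric couplings
np.fill_diagonal(J, 0.0)     # no self-coupling
h = rng.normal(size=n)       # external fields
s = rng.choice([-1, 1], size=n)
print(ising_energy(s, J, h))
```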
arXiv Detail & Related papers (2024-01-30T19:52:02Z)
- Machine Learning aided Computer Architecture Design for CNN Inferencing Systems
We develop a technique for forecasting the power and performance of CNNs during inference, with MAPEs of 5.03% and 5.94%, respectively.
Our approach empowers computer architects to estimate power and performance in the early stages of development, reducing the necessity for numerous prototypes.
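The MAPE figures quoted above are mean absolute percentage errors; a small sketch of the metric with made-up numbers (not the paper's data):

```python
# Sketch of the MAPE metric; toy values are illustrative only.
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

print(mape([10.0, 20.0, 30.0], [10.5, 19.0, 31.2]))  # ~4.7%
```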
arXiv Detail & Related papers (2023-08-10T06:17:46Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
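snnTorch builds networks from spiking neurons such as the leaky integrate-and-fire (LIF) model; the plain-Python sketch below illustrates the neuron dynamics only and is not snnTorch's actual implementation:

```python
# Minimal leaky integrate-and-fire (LIF) neuron of the kind snnTorch provides;
# parameters (beta, threshold) are illustrative defaults.
def lif(currents, beta=0.9, threshold=1.0):
    """Simulate one LIF neuron over an input-current trace; return spike train."""
    mem, spikes = 0.0, []
    for i in currents:
        mem = beta * mem + i       # leaky integration of input current
        spk = mem >= threshold     # fire when the membrane crosses threshold
        if spk:
            mem -= threshold       # soft reset after a spike
        spikes.append(int(spk))
    return spikes

print(lif([0.3] * 10))  # sparse, event-driven output
```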
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Reservoir Memory Machines as Neural Computers
Differentiable neural computers extend artificial neural networks with an explicit memory without interference.
We achieve some of the computational capabilities of differentiable neural computers with a model that can be trained very efficiently.
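The cheap training comes from the reservoir-computing recipe: fix the recurrent weights and train only a linear readout. A hedged echo-state sketch (sizes and scaling are illustrative; the paper's explicit memory mechanism is not reproduced here):

```python
# Echo-state reservoir update x' = tanh(W_in u + W x); only a linear readout
# on x would be trained (e.g., ridge regression), not shown here.
import numpy as np

rng = np.random.default_rng(2)
n_in, n_res = 3, 50
W_in = rng.normal(scale=0.5, size=(n_res, n_in))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1 for stability

x = np.zeros(n_res)
for t in range(100):
    u = rng.random(n_in)           # input at time t
    x = np.tanh(W_in @ u + W @ x)  # fixed recurrent dynamics; never trained
```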
arXiv Detail & Related papers (2020-09-14T12:01:30Z)
- Training End-to-End Analog Neural Networks with Equilibrium Propagation
We introduce a principled method to train end-to-end analog neural networks by gradient descent.
We show mathematically that a class of analog neural networks (called nonlinear resistive networks) are energy-based models.
Our work can guide the development of a new generation of ultra-fast, compact and low-power neural networks supporting on-chip learning.
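Equilibrium propagation estimates gradients by contrasting two equilibria of the physical network, roughly ∂L/∂θ ≈ (∂E/∂θ|nudged − ∂E/∂θ|free)/β. A minimal sketch, assuming the two equilibrium states have already been measured:

```python
# Hedged sketch of the equilibrium propagation gradient estimate; assumes the
# free and output-nudged equilibria were already computed by the network.
import numpy as np

def eqprop_grad(dE_dtheta_free, dE_dtheta_nudged, beta=0.01):
    """Contrast energy gradients at the free and nudged equilibria."""
    return (dE_dtheta_nudged - dE_dtheta_free) / beta

g = eqprop_grad(np.array([0.10, -0.20]), np.array([0.11, -0.18]))
print(g)  # parameter update direction, with no backprop graph needed
```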
arXiv Detail & Related papers (2020-06-02T23:38:35Z)
- One-step regression and classification with crosspoint resistive memory arrays
High-speed, low-energy computing machines are in demand to enable real-time artificial intelligence at the edge.
One-step learning is demonstrated in simulations of Boston house-price prediction and the training of a two-layer neural network for MNIST digit recognition.
Results are all obtained in one computational step, thanks to the physical, parallel, and analog computing within the crosspoint array.
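The single physical step is Ohm's and Kirchhoff's laws doing a matrix-vector multiply: applied row voltages V and device conductances G yield column currents I = GᵀV in parallel. A toy sketch (sizes and values are illustrative):

```python
# Sketch of the analog computation inside a crosspoint array; the array
# dimensions and device values below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))  # conductances (siemens): 4 rows x 3 cols
V = rng.uniform(0.0, 0.2, size=4)         # voltages applied to the 4 rows (volts)
I = G.T @ V                               # column currents, read out in parallel
print(I)
```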
arXiv Detail & Related papers (2020-05-05T08:00:07Z)
- Spiking Neural Networks Hardware Implementations and Challenges: a Survey
Spiking Neural Networks are cognitive algorithms mimicking neuron and synapse operational principles.
We present the state of the art of hardware implementations of spiking neural networks.
We discuss the strategies employed to leverage the characteristics of these event-driven algorithms at the hardware level.
arXiv Detail & Related papers (2020-05-04T13:24:00Z)
- Einsum Networks: Fast and Scalable Learning of Tractable Probabilistic Circuits
We propose Einsum Networks (EiNets), a novel implementation design for probabilistic circuits (PCs).
At their core, EiNets combine a large number of arithmetic operations in a single monolithic einsum operation.
We show that the implementation of Expectation-Maximization (EM) can be simplified for PCs by leveraging automatic differentiation.
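The monolithic-einsum idea can be illustrated in NumPy: one einsum call replaces a loop of per-component outer products and weighted sums. The shapes below are illustrative, not EiNets' actual layout:

```python
# Illustration of fusing many sum-product operations into one einsum call;
# B, K, I, J are toy dimensions, not EiNets' real layer shapes.
import numpy as np

rng = np.random.default_rng(4)
B, K, I, J = 32, 10, 8, 8        # batch, mixture components, child dimensions
left = rng.random((B, I))        # activations of the left child
right = rng.random((B, J))       # activations of the right child
W = rng.random((K, I, J))        # weights of K product-sum units

# One einsum replaces a Python loop over K outer products and weighted sums:
out = np.einsum('bi,bj,kij->bk', left, right, W)
print(out.shape)  # (32, 10)
```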
arXiv Detail & Related papers (2020-04-13T23:09:15Z)
- Learnergy: Energy-based Machine Learners
Machine learning techniques have been broadly adopted in the context of deep learning architectures.
An exciting algorithm, the Restricted Boltzmann Machine, relies on an energy-based, probabilistic formulation to tackle diverse applications such as classification, reconstruction, and generation of images and signals.
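The energy an RBM assigns to a joint configuration is E(v, h) = −bᵀv − cᵀh − vᵀWh; a plain-NumPy sketch of that formula (not the Learnergy API):

```python
# Sketch of the Restricted Boltzmann Machine energy function; sizes and
# parameter values are illustrative.
import numpy as np

def rbm_energy(v, h, W, b, c):
    """Joint energy of visible vector v and hidden vector h."""
    return -(b @ v + c @ h + v @ W @ h)

rng = np.random.default_rng(5)
nv, nh = 6, 4
W = rng.normal(size=(nv, nh))
b, c = rng.normal(size=nv), rng.normal(size=nh)
v = rng.integers(0, 2, nv)  # binary visible units
h = rng.integers(0, 2, nh)  # binary hidden units
print(rbm_energy(v, h, W, b, c))  # lower energy = higher probability
```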
arXiv Detail & Related papers (2020-03-16T21:14:32Z)
- Resource-Efficient Neural Networks for Embedded Systems
We provide an overview of the current state of the art of machine learning techniques.
We focus on resource-efficient inference based on deep neural networks (DNNs), the predominant machine learning models of the past decade.
We substantiate our discussion with experiments on well-known benchmark data sets using compression techniques.
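One representative technique from this compression family is uniform 8-bit post-training quantization; a minimal sketch, not the paper's exact setup:

```python
# Uniform 8-bit post-training quantization of a weight tensor; the per-tensor
# scale scheme here is one common choice among many the survey covers.
import numpy as np

def quantize_int8(w):
    """Map float weights to int8 with a single per-tensor scale."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(6)
w = rng.normal(size=(64, 64)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = q.astype(np.float32) * scale  # dequantized approximation
print(np.max(np.abs(w - w_hat)))      # small reconstruction error
```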
arXiv Detail & Related papers (2020-01-07T14:17:09Z)