Self-timed Reinforcement Learning using Tsetlin Machine
- URL: http://arxiv.org/abs/2109.00846v1
- Date: Thu, 2 Sep 2021 11:24:23 GMT
- Title: Self-timed Reinforcement Learning using Tsetlin Machine
- Authors: Adrian Wheeldon, Alex Yakovlev, Rishad Shafik
- Abstract summary: We present a hardware design for the learning datapath of the Tsetlin machine algorithm, along with a latency analysis of the inference datapath.
Results illustrate the advantages of asynchronous design in applications such as personalized healthcare and battery-powered internet of things devices.
- Score: 1.104960878651584
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a hardware design for the learning datapath of the Tsetlin machine algorithm, along with a latency analysis of the inference datapath. To generate low-energy hardware suitable for pervasive artificial intelligence applications, we use a mixture of asynchronous design techniques, including Petri nets, signal transition graphs, dual-rail and bundled-data. The work builds on a previous design of the inference hardware and includes an in-depth breakdown of the automaton feedback, probability generation and Tsetlin automata. Results illustrate the advantages of asynchronous design in applications such as personalized healthcare and battery-powered internet-of-things devices, where energy is limited and latency is an important figure of merit. Challenges of static timing analysis in asynchronous circuits are also addressed.
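The in-depth breakdown mentioned in the abstract centers on the Tsetlin automaton, the reinforcement-learning element whose reward/penalty feedback the learning datapath implements. Below is a minimal software sketch of a two-action Tsetlin automaton; the class name, state numbering and the toy bandit environment are illustrative assumptions, not the paper's hardware.

```python
import random

class TsetlinAutomaton:
    """Two-action Tsetlin automaton with 2*n states. States 1..n
    select action 'exclude'; states n+1..2n select 'include'.
    Rewards push the state deeper into the current action's half;
    penalties push it toward, and eventually across, the boundary.
    Software sketch only -- not the authors' asynchronous circuit."""

    def __init__(self, n=100):
        self.n = n
        self.state = n  # start at the boundary, on the 'exclude' side

    @property
    def action(self):
        return "include" if self.state > self.n else "exclude"

    def reward(self):
        # Reinforce the current action: move away from the boundary.
        if self.state > self.n:
            self.state = min(self.state + 1, 2 * self.n)
        else:
            self.state = max(self.state - 1, 1)

    def penalize(self):
        # Weaken the current action: move toward (and possibly past)
        # the boundary, which flips the selected action.
        self.state += -1 if self.state > self.n else 1

random.seed(0)
ta = TsetlinAutomaton()
# Toy two-armed bandit: 'include' is the correct action 70% of the
# time, so feedback should drive the automaton into the upper half.
for _ in range(1000):
    include_correct = random.random() < 0.7
    if (ta.action == "include") == include_correct:
        ta.reward()
    else:
        ta.penalize()
print(ta.action, ta.state)
```

In the paper this feedback is gated by stochastic probability-generation logic, and the state update is realized with self-timed circuits rather than software.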
Related papers
Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937] (arXiv, 2024-10-29)
Task-oriented edge computing shifts data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
Time-Series Forecasting and Sequence Learning Using Memristor-based Reservoir System [2.6473021051027534] (arXiv, 2024-05-22)
We develop a memristor-based echo state network accelerator that features efficient temporal data processing and in-situ online learning.
The proposed design is benchmarked using various datasets involving real-world tasks, such as forecasting the load energy consumption and weather conditions.
The system remains reasonably robust at device-failure rates below 10%, as may occur due to stuck-at faults.
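The echo state network underlying this accelerator keeps a fixed random recurrent reservoir and trains only a linear readout, which is what maps naturally onto a memristor crossbar. A minimal software sketch follows; the reservoir size, leak rate, toy signal and offline ridge readout are illustrative assumptions (the paper's system learns the readout online, in situ).

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_res = 1, 100
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))   # fixed input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))     # fixed reservoir weights
W *= 0.9 / max(abs(np.linalg.eigvals(W)))      # echo state property

def run_reservoir(u_seq, leak=0.3):
    """Drive the leaky reservoir with a scalar sequence and collect states."""
    x, states = np.zeros(n_res), []
    for u in u_seq:
        x = (1 - leak) * x + leak * np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x.copy())
    return np.array(states)

# One-step-ahead forecasting of a toy quasi-periodic signal.
t = np.linspace(0, 20 * np.pi, 2000)
signal = np.sin(t) + 0.5 * np.sin(2.3 * t)
X, y = run_reservoir(signal[:-1]), signal[1:]

# Ridge-regression readout -- the only trained part of the model.
reg = 1e-6
W_out = np.linalg.solve(X.T @ X + reg * np.eye(n_res), X.T @ y)
print("train MSE:", np.mean((X @ W_out - y) ** 2))
```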
Noise-Aware Training of Neuromorphic Dynamic Device Networks [2.2691986670431197] (arXiv, 2024-01-14)
We propose a novel, noise-aware methodology for training device networks.
Our approach employs backpropagation through time and cascade learning, allowing networks to effectively exploit the temporal properties of physical devices.
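The essence of noise-aware training is to inject a model of device noise into the forward pass so that gradients are taken through the noise and the learned weights tolerate it. The sketch below applies that idea to a toy feed-forward regressor; the additive-Gaussian noise model, network size and task are assumptions, and the paper's method additionally uses backpropagation through time and cascade learning on physical device dynamics.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression task.
X = rng.uniform(-1, 1, (256, 4))
y = np.sin(X.sum(axis=1, keepdims=True))

W1, b1 = rng.normal(0, 0.5, (4, 16)), np.zeros(16)
W2, b2 = rng.normal(0, 0.5, (16, 1)), np.zeros(1)
lr, noise_std = 0.05, 0.05

for step in range(2000):
    pre = X @ W1 + b1
    pre = pre + rng.normal(0, noise_std, pre.shape)  # injected device noise
    h = np.tanh(pre)
    out = h @ W2 + b2
    err = out - y
    # Backpropagation through the *noisy* forward pass.
    gW2, gb2 = h.T @ err / len(X), err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1, gb1 = X.T @ dh / len(X), dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print("final noisy-forward MSE:", float(np.mean(err ** 2)))
```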
Efficient Asynchronous Federated Learning with Sparsification and Quantization [55.6801207905772] (arXiv, 2023-12-23)
Federated Learning (FL) is attracting growing attention as a way to collaboratively train a machine learning model without transferring raw data.
FL generally exploits a parameter server and a large number of edge devices during the whole process of model training.
We propose TEASQ-Fed to exploit edge devices to asynchronously participate in the training process by actively applying for tasks.
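Sparsification and quantization both shrink the updates that edge devices upload. A generic sketch of top-k sparsification followed by uniform quantization is shown below; the function names and parameters are invented for illustration, and TEASQ-Fed's exact compression scheme is not reproduced here.

```python
import numpy as np

def compress_update(delta, k_frac=0.01, n_bits=8):
    """Keep only the top k fraction of entries by magnitude, then
    quantize the survivors to n_bits signed integers."""
    flat = delta.ravel()
    k = max(1, int(k_frac * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]          # top-k indices
    vals = flat[idx]
    scale = np.abs(vals).max() / (2 ** (n_bits - 1) - 1)  # quantization step
    q = np.round(vals / scale).astype(np.int8)
    return idx, q, scale, delta.shape

def decompress_update(idx, q, scale, shape):
    flat = np.zeros(int(np.prod(shape)))
    flat[idx] = q.astype(np.float64) * scale
    return flat.reshape(shape)

delta = np.random.default_rng(2).normal(size=(64, 64))
idx, q, scale, shape = compress_update(delta)
recovered = decompress_update(idx, q, scale, shape)
print(f"kept {idx.size} of {delta.size} values, "
      f"max reconstruction error {np.abs(recovered - delta).max():.3f}")
```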
Random resistive memory-based deep extreme point learning machine for unified visual processing [67.51600474104171] (arXiv, 2023-12-14)
We propose a novel hardware-software co-design: the random resistive memory-based deep extreme point learning machine (DEPLM).
Our co-design system achieves significant energy-efficiency improvements and training-cost reductions compared with conventional systems.
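An extreme learning machine keeps its hidden projection random and fixed, which is what allows the intrinsic randomness of resistive memory to implement it, so training collapses to a single linear solve for the readout. A shallow software sketch of that core idea follows; the sizes, task and ridge regularizer are illustrative assumptions rather than the DEPLM configuration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy binary classification task.
X = rng.uniform(-1, 1, (500, 8))
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)

W_rand = rng.normal(0, 1, (8, 256))   # fixed random projection
H = np.maximum(X @ W_rand, 0)         # hidden features (ReLU)

# Training = one regularized least-squares solve for the readout.
reg = 1e-3
beta = np.linalg.solve(H.T @ H + reg * np.eye(256), H.T @ y)
print("train accuracy:", np.mean(((H @ beta) > 0.5) == y))
```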
Energy-frugal and Interpretable AI Hardware Design using Learning Automata [5.514795777097036] (arXiv, 2023-05-19)
A new machine learning algorithm, called the Tsetlin machine, has been proposed.
In this paper, we investigate methods of energy-frugal artificial intelligence hardware design.
We show that frugal resource allocation can provide decisive energy reduction while also achieving robust and interpretable learning.
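For context on how a Tsetlin machine classifies: each clause is a conjunction (AND) over the literals its automata chose to include, and the class decision is a vote between positive and negative clauses. The sketch below evaluates hand-picked clauses on an XOR-style toy input; it is illustrative, not this paper's hardware datapath or learned clauses.

```python
def clause_output(clause, literals):
    # A clause fires only if every included literal is 1 (a conjunction).
    return all(literals[i] for i in clause)

def classify(pos_clauses, neg_clauses, x):
    # Literal vector = input bits followed by their negations.
    literals = list(x) + [1 - b for b in x]
    votes = sum(clause_output(c, literals) for c in pos_clauses) \
          - sum(clause_output(c, literals) for c in neg_clauses)
    return int(votes >= 0)

# Hand-picked clauses over [x1, x2, ~x1, ~x2] implementing XOR:
pos = [{0, 3}, {1, 2}]   # x1 AND ~x2,  x2 AND ~x1  vote for class 1
neg = [{0, 1}, {2, 3}]   # x1 AND x2,  ~x1 AND ~x2  vote against
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", classify(pos, neg, x))
```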
Dynamic Scheduling for Federated Edge Learning with Streaming Data [56.91063444859008] (arXiv, 2023-05-02)
We consider a Federated Edge Learning (FEEL) system where training data are randomly generated over time at a set of distributed edge devices with long-term energy constraints.
Due to limited communication resources and latency requirements, only a subset of devices is scheduled to participate in the local training process in every iteration.
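A caricature of energy-constrained scheduling is to admit only the k devices with the most remaining energy each round. The greedy sketch below is a hypothetical stand-in: the paper's scheduler also weighs data importance and long-term energy constraints, and every name and number here is invented.

```python
import heapq

def schedule_devices(energy, k):
    # Greedily pick the k devices with the most remaining energy.
    return heapq.nlargest(k, energy, key=energy.get)

energy = {"dev0": 5.0, "dev1": 1.2, "dev2": 3.7, "dev3": 0.4}
round_cost = 0.5   # energy drained by one round of local training
for rnd in range(3):
    chosen = schedule_devices(energy, k=2)
    for d in chosen:
        energy[d] -= round_cost
    print(f"round {rnd}: scheduled {chosen}")
```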
MAPLE-X: Latency Prediction with Explicit Microprocessor Prior Knowledge [87.41163540910854] (arXiv, 2022-05-25)
Deep neural network (DNN) latency characterization is a time-consuming process.
We propose MAPLE-X, which extends MAPLE by incorporating explicit prior knowledge of hardware devices and DNN architecture latency.
Low-Latency Asynchronous Logic Design for Inference at the Edge [0.9831489366502301] (arXiv, 2020-12-07)
We propose a method for reducing the area and power overhead of self-timed early-propagative asynchronous inference circuits.
Owing to the natural resilience of their timing and logic underpinnings, the circuits tolerate variations in environment and supply voltage.
The average latency of the proposed circuit is reduced by 10x compared with the synchronous implementation.
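Early-propagative dual-rail logic, which also underpins the present paper's datapaths, lets a gate produce a valid output as soon as enough inputs have resolved, instead of waiting for a clock edge. The sketch below models this behaviour for an AND gate using the common (true-rail, false-rail) encoding; it is a behavioural illustration, not the authors' circuit.

```python
# Dual-rail encoding: each Boolean travels on two rails (t, f).
# (0, 0) is the NULL/spacer token; (1, 0) is logical 1; (0, 1) is 0.
NULL, TRUE, FALSE = (0, 0), (1, 0), (0, 1)

def dual_rail_and(a, b):
    # Early propagation: one valid FALSE input decides the output
    # without waiting for the other operand to arrive.
    if a == FALSE or b == FALSE:
        return FALSE
    if a == TRUE and b == TRUE:
        return TRUE
    return NULL   # still waiting for valid data

print(dual_rail_and(FALSE, NULL))  # (0, 1): resolved early, b still NULL
print(dual_rail_and(TRUE, NULL))   # (0, 0): must wait for b
```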
Machine Learning Link Inference of Noisy Delay-coupled Networks with Opto-Electronic Experimental Tests [1.0766846340954257] (arXiv, 2020-10-29)
We devise a machine learning technique to solve the general problem of inferring network links that have time-delays.
We first train a type of machine learning system known as reservoir computing to mimic the dynamics of the unknown network.
We formulate and test a technique that uses the trained parameters of the reservoir system output layer to deduce an estimate of the unknown network structure.
One-step regression and classification with crosspoint resistive memory arrays [62.997667081978825] (arXiv, 2020-05-05)
High-speed, low-energy computing machines are in demand to enable real-time artificial intelligence at the edge.
One-step learning is supported by simulations of the prediction of the cost of a house in Boston and the training of a 2-layer neural network for MNIST digit recognition.
Results are all obtained in one computational step, thanks to the physical, parallel, and analog computing within the crosspoint array.
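The "one computational step" refers to the crosspoint array solving the regression's linear system physically, in a single analog operation. The software counterpart is a single closed-form least-squares solve, sketched below on synthetic data (the paper's Boston-housing and MNIST demonstrations are not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic linear-regression data.
X = rng.normal(size=(200, 5))
true_w = np.array([1.5, -2.0, 0.3, 0.0, 0.7])
y = X @ true_w + 0.01 * rng.normal(size=200)

# One step: the closed-form least-squares solution, the same system
# the crosspoint array solves in the analog domain.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print("recovered weights:", np.round(w, 2))
```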