Reservoir-Based Distributed Machine Learning for Edge Operation
- URL: http://arxiv.org/abs/2104.00751v1
- Date: Thu, 1 Apr 2021 20:06:40 GMT
- Title: Reservoir-Based Distributed Machine Learning for Edge Operation
- Authors: Silvija Kokalj-Filipovic, Paul Toliver, William Johnson, Rob Miller
- Abstract summary: We introduce a novel design for in-situ training of machine learning algorithms built into smart sensors.
We illustrate distributed training scenarios using radio frequency (RF) spectrum sensors.
- Score: 0.6451914896767135
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce a novel design for in-situ training of machine learning
algorithms built into smart sensors, and illustrate distributed training
scenarios using radio frequency (RF) spectrum sensors. Current RF sensors at
the Edge lack the computational resources to support practical, in-situ
training for intelligent signal classification. We propose a solution using
Deep delay Loop Reservoir Computing (DLR), a processing architecture that
supports machine learning algorithms on resource-constrained edge devices by
leveraging delay-loop reservoir computing in combination with innovative
hardware. DLR delivers reductions in form factor, hardware complexity and
latency, compared to the State-of-the-Art (SoA) neural nets. We demonstrate DLR
for two applications: RF Specific Emitter Identification (SEI) and wireless
protocol recognition. DLR enables mobile edge platforms to authenticate and
then track emitters with fast SEI retraining. Once delay loops separate the
data classes, traditionally complex, power-hungry classification models are no
longer needed for the learning process. Yet, even with simple classifiers such
as Ridge Regression (RR), the complexity grows at least quadratically with the
input size. DLR with an RR classifier exceeds the SoA accuracy, while further
reducing power consumption by leveraging the architecture of parallel (split)
loops. To authenticate mobile devices across large regions, DLR can be trained
in a distributed fashion with very little additional processing and a small
communication cost, all while maintaining accuracy. We illustrate how to merge
locally trained DLR classifiers in use cases of interest.
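To make the pipeline concrete, below is a minimal NumPy sketch of the architecture the abstract describes: each input is time-multiplexed through a small delay loop (one nonlinear node with delayed feedback), several such loops run in parallel with their state vectors concatenated, and a ridge-regression (RR) readout is solved in closed form. All names, loop sizes, masks, and the synthetic data are illustrative assumptions, not the authors' hardware design; the cost comments show why RR complexity grows at least quadratically in the feature dimension, which is exactly what the split loops keep small.

```python
import numpy as np

def delay_loop_states(U, n_virtual, gain=0.8, feedback=0.5, seed=0):
    """Time-multiplex each input vector through one nonlinear node with
    delayed feedback; the virtual-node states form the reservoir output."""
    rng = np.random.default_rng(seed)
    mask = rng.uniform(-1.0, 1.0, size=(n_virtual, U.shape[1]))  # fixed random mask
    X = np.zeros((len(U), n_virtual))
    for t, u in enumerate(U):
        x = np.zeros(n_virtual)
        for i in range(n_virtual):
            # each virtual node sees its masked input plus the delayed state
            x[i] = np.tanh(gain * (mask[i] @ u) + feedback * x[i - 1])
        X[t] = x
    return X

def split_loop_states(U, n_loops, nodes_per_loop):
    """Parallel (split) loops: several small loops replace one big one,
    and their state vectors are combined into the classifier input."""
    return np.concatenate(
        [delay_loop_states(U, nodes_per_loop, seed=k) for k in range(n_loops)],
        axis=1)

def fit_ridge(X, y, n_classes, lam=1e-2):
    """Closed-form ridge readout on one-hot targets.
    Cost: O(n*d^2) to build the Gram matrix and O(d^3) to solve it for
    d reservoir features -- the quadratic-plus growth the abstract cites."""
    Y = np.eye(n_classes)[y]
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ Y)

# Toy stand-in for RF feature vectors (e.g., SEI fingerprints), 5 classes.
rng = np.random.default_rng(1)
n, d_in, n_classes = 600, 64, 5
y = rng.integers(0, n_classes, size=n)
U = rng.normal(size=(n_classes, d_in))[y] + 0.3 * rng.normal(size=(n, d_in))

X = split_loop_states(U, n_loops=4, nodes_per_loop=32)  # 4 cheap loops, d = 128
W = fit_ridge(X[:500], y[:500], n_classes)
acc = ((X[500:] @ W).argmax(axis=1) == y[500:]).mean()
print(f"held-out accuracy: {acc:.2f}")
```

Because the masks are fixed by seed, every device running this front end produces identical reservoir features for identical inputs; that determinism is what makes the distributed merging sketched at the end of the related-papers list possible.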
Related papers
- Efficient Asynchronous Federated Learning with Sparsification and
Quantization [55.6801207905772]
Federated Learning (FL) is attracting growing attention as a way to collaboratively train a machine learning model without transferring raw data.
FL generally exploits a parameter server and a large number of edge devices during the whole process of the model training.
We propose TEASQ-Fed to exploit edge devices to asynchronously participate in the training process by actively applying for tasks.
arXiv Detail & Related papers (2023-12-23T07:47:07Z) - Efficient Model Adaptation for Continual Learning at the Edge [15.334881190102895]
Most machine learning (ML) systems assume stationary and matching data distributions during training and deployment.
Data distributions often shift over time due to changes in environmental factors, sensor characteristics, and the task of interest.
This paper presents the Encoder-Adaptor-Reconfigurator (EAR) framework for efficient continual learning under domain shifts.
arXiv Detail & Related papers (2023-08-03T23:55:17Z) - Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z) - An Adaptive Device-Edge Co-Inference Framework Based on Soft
Actor-Critic [72.35307086274912]
High-dimensional parameter models and large-scale mathematical calculations restrict execution efficiency, especially for Internet of Things (IoT) devices.
We propose a new Deep Reinforcement Learning (DRL) method, Soft Actor-Critic for discrete actions (SAC-d), which generates the exit point and compressing bits by soft policy iterations.
Based on a latency- and accuracy-aware reward design, such a computation can adapt well to complex environments like dynamic wireless channels and arbitrary processing, and is capable of supporting 5G URLLC.
arXiv Detail & Related papers (2022-01-09T09:31:50Z) - Complexity-aware Adaptive Training and Inference for Edge-Cloud
Distributed AI Systems [9.273593723275544]
IoT and machine learning applications create large amounts of data that require real-time processing.
We propose a distributed AI system to exploit both the edge and the cloud for training and inference.
arXiv Detail & Related papers (2021-09-14T05:03:54Z) - Reservoir Based Edge Training on RF Data To Deliver Intelligent and
Efficient IoT Spectrum Sensors [0.6451914896767135]
We propose a processing architecture that supports general machine learning algorithms on compact mobile devices.
Deep Delay Loop Reservoir Computing (DLR) delivers reductions in form factor, hardware complexity and latency, compared to the State-of-the-Art (SoA) neural nets.
We present DLR architectures composed of multiple smaller loops whose state vectors are linearly combined to create a lower dimensional input into Ridge regression.
arXiv Detail & Related papers (2021-04-01T20:08:01Z) - Enabling Incremental Training with Forward Pass for Edge Devices [0.0]
We introduce a method using evolutionary strategy (ES) that can partially retrain the network enabling it to adapt to changes and recover after an error has occurred.
This technique enables training on inference-only hardware without the need for backpropagation and with minimal resource overhead.
arXiv Detail & Related papers (2021-03-25T17:43:04Z) - Deep Delay Loop Reservoir Computing for Specific Emitter Identification [0.5906031288935515]
Current AI systems at the tactical edge lack the computational resources to support in-situ training and inference for situational awareness.
We propose a solution through Deep delay Loop Reservoir Computing (DLR), a processing architecture supporting general machine learning algorithms on compact mobile devices.
arXiv Detail & Related papers (2020-10-13T19:32:38Z) - Coded Stochastic ADMM for Decentralized Consensus Optimization with Edge
Computing [113.52575069030192]
Big data, including data from applications with high security requirements, are often collected and stored on multiple heterogeneous devices, such as mobile devices, drones and vehicles.
Due to the limitations of communication costs and security requirements, it is of paramount importance to extract information in a decentralized manner instead of aggregating data to a fusion center.
We consider the problem of learning model parameters in a multi-agent system with data locally processed via distributed edge nodes.
A class of mini-batch alternating direction method of multipliers (ADMM) algorithms is explored to develop the distributed learning model.
arXiv Detail & Related papers (2020-10-02T10:41:59Z) - Fast-Convergent Federated Learning [82.32029953209542]
Federated learning is a promising solution for distributing machine learning tasks through modern networks of mobile devices.
We propose a fast-convergent federated learning algorithm, called FOLB, which performs intelligent sampling of devices in each round of model training.
arXiv Detail & Related papers (2020-07-26T14:37:51Z) - A Compressive Sensing Approach for Federated Learning over Massive MIMO
Communication Systems [82.2513703281725]
Federated learning is a privacy-preserving approach to train a global model at a central server by collaborating with wireless devices.
We present a compressive sensing approach for federated learning over massive multiple-input multiple-output communication systems.
arXiv Detail & Related papers (2020-03-18T05:56:27Z)
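Several of the related papers above tackle the same distributed-training problem the abstract raises: merging models trained on separate edge devices. For a ridge-regression readout the merge can be exact and cheap, because each device only needs to ship its second-moment statistics (O(d^2) numbers, independent of local dataset size). The sketch below is one plausible realization under the assumption that every device runs an identical delay-loop front end (shared masks/seeds); it is illustrative, not necessarily the authors' protocol.

```python
import numpy as np

def local_stats(X, y, n_classes):
    """Per-device sufficient statistics for the shared ridge readout.
    Only these O(d^2) numbers cross the network, never the raw RF data."""
    Y = np.eye(n_classes)[y]
    return X.T @ X, X.T @ Y

def merge_ridge(stats, lam=1e-2):
    """Summing the local statistics reproduces, exactly, the readout that
    centralized training on the pooled data would produce."""
    A = sum(s[0] for s in stats)
    B = sum(s[1] for s in stats)
    return np.linalg.solve(A + lam * np.eye(A.shape[0]), B)

# Two devices with disjoint local data; X1, X2 stand in for reservoir
# state vectors from identical delay-loop front ends on each device.
rng = np.random.default_rng(2)
d, n_classes = 128, 5
W_true = rng.normal(size=(d, n_classes))
X1 = rng.normal(size=(400, d)); y1 = (X1 @ W_true).argmax(axis=1)
X2 = rng.normal(size=(300, d)); y2 = (X2 @ W_true).argmax(axis=1)

W = merge_ridge([local_stats(X1, y1, n_classes), local_stats(X2, y2, n_classes)])
acc = ((np.vstack([X1, X2]) @ W).argmax(axis=1) == np.concatenate([y1, y2])).mean()
print(f"merged-readout accuracy on pooled data: {acc:.2f}")
```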
This list is automatically generated from the titles and abstracts of the papers on this site.