Deep Delay Loop Reservoir Computing for Specific Emitter Identification
- URL: http://arxiv.org/abs/2010.06649v1
- Date: Tue, 13 Oct 2020 19:32:38 GMT
- Title: Deep Delay Loop Reservoir Computing for Specific Emitter Identification
- Authors: Silvija Kokalj-Filipovic and Paul Toliver and William Johnson and
Raymond R. Hoare II and Joseph J. Jezak
- Abstract summary: Current AI systems at the tactical edge lack the computational resources to support in-situ training and inference for situational awareness.
We propose a solution through Deep Delay Loop Reservoir Computing (DLR), a processing architecture supporting general machine learning algorithms on compact mobile devices.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current AI systems at the tactical edge lack the computational resources to
support in-situ training and inference for situational awareness, and it is not
always practical to leverage backhaul resources due to security, bandwidth, and
mission latency requirements. We propose a solution through Deep Delay Loop
Reservoir Computing (DLR), a processing architecture supporting general machine
learning algorithms on compact mobile devices by leveraging delay-loop (DL)
reservoir computing in combination with innovative photonic hardware exploiting
the inherent speed, and spatial, temporal and wavelength-based processing
diversity of signals in the optical domain. DLR delivers reductions in form
factor, hardware complexity, power consumption and latency compared to the
state of the art. DLR can be implemented with a single photonic DL and a few
electro-optical components. In certain cases, multiple DL layers increase the
learning capacity of DLR with no added latency. We demonstrate the
advantages of DLR on the application of RF Specific Emitter Identification.
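The abstract's core technique, delay-loop reservoir computing with a trained linear readout, can be sketched in software. The following is a minimal illustrative sketch, not the paper's photonic implementation: a single nonlinear node with delayed feedback is time-multiplexed over virtual nodes via a random input mask, and a ridge-regression readout is fit on the collected states. All parameter names and values (`n_virtual`, `eta`, `gamma`, `lam`) are assumptions for illustration.

```python
import numpy as np

def delay_loop_reservoir(signal, n_virtual=50, eta=0.5, gamma=0.05, seed=0):
    """Single-node delay-loop reservoir (software sketch, hypothetical parameters).

    Each input sample is spread across n_virtual virtual nodes by a random
    +/-1 mask; each virtual node mixes the masked input with its own delayed
    state through a tanh nonlinearity (one full loop period of delay).
    """
    rng = np.random.default_rng(seed)
    mask = rng.choice([-1.0, 1.0], size=n_virtual)  # time-multiplexing mask
    states = np.zeros((len(signal), n_virtual))
    loop = np.zeros(n_virtual)  # delayed feedback register (one loop period)
    for t, u in enumerate(signal):
        for i in range(n_virtual):
            loop[i] = np.tanh(eta * loop[i] + gamma * mask[i] * u)
        states[t] = loop
    return states

def ridge_readout(states, targets, lam=1e-3):
    """Closed-form ridge regression readout: W = (S^T S + lam I)^-1 S^T y."""
    S = states
    return np.linalg.solve(S.T @ S + lam * np.eye(S.shape[1]), S.T @ targets)
```

In this scheme only the readout weights are trained, which is what makes in-situ training cheap: the reservoir itself (here simulated, in the paper a photonic delay loop) needs no gradient updates.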
Related papers
- Dynamic Spectrum Access for Ambient Backscatter Communication-assisted D2D Systems with Quantum Reinforcement Learning [68.63990729719369]
The wireless spectrum is becoming scarce, resulting in low spectral efficiency for D2D communications.
This paper aims to integrate the ambient backscatter communication technology into D2D devices to allow them to backscatter ambient RF signals.
We develop a novel quantum reinforcement learning (RL) algorithm that can achieve a faster convergence rate with fewer training parameters.
arXiv Detail & Related papers (2024-10-23T15:36:43Z)
- SCATTER: Algorithm-Circuit Co-Sparse Photonic Accelerator with Thermal-Tolerant, Power-Efficient In-situ Light Redistribution [7.378742476019604]
Photonic computing has emerged as a promising solution for accelerating computation-intensive artificial intelligence (AI) workloads.
However, limited reconfigurability, high electrical-optical conversion cost, and thermal sensitivity limit the deployment of current optical analog computing engines to support power-restricted, performance-sensitive AI workloads at scale.
We propose SCATTER, a novel algorithm-circuit co-sparse photonic accelerator featuring dynamically reconfigurable signal path via thermal-tolerant, power-efficient in-situ light redistribution and power gating.
arXiv Detail & Related papers (2024-07-07T22:57:44Z)
- Efficient and accurate neural field reconstruction using resistive memory [52.68088466453264]
Traditional signal reconstruction methods on digital computers face both software and hardware challenges.
We propose a systematic approach with software-hardware co-optimizations for signal reconstruction from sparse inputs.
This work advances the AI-driven signal restoration technology and paves the way for future efficient and robust medical AI and 3D vision applications.
arXiv Detail & Related papers (2024-04-15T09:33:09Z)
- TeMPO: Efficient Time-Multiplexed Dynamic Photonic Tensor Core for Edge AI with Compact Slow-Light Electro-Optic Modulator [44.74560543672329]
We present a time-multiplexed dynamic photonic tensor accelerator, dubbed TeMPO, with cross-layer device/circuit/architecture customization.
We achieve a 368.6 TOPS peak performance, 22.3 TOPS/W energy efficiency, and 1.2 TOPS/mm$^2$ compute density.
This work signifies the power of cross-layer co-design and domain-specific customization, paving the way for future electronic-photonic accelerators.
arXiv Detail & Related papers (2024-02-12T03:40:32Z)
- Random resistive memory-based deep extreme point learning machine for unified visual processing [67.51600474104171]
We propose a novel hardware-software co-design, the random resistive memory-based deep extreme point learning machine (DEPLM).
Our co-design system achieves huge energy efficiency improvements and training cost reduction when compared to conventional systems.
arXiv Detail & Related papers (2023-12-14T09:46:16Z)
- Neural Network Methods for Radiation Detectors and Imaging [1.6395318070400589]
Recent advances in machine learning and especially deep neural networks (DNNs) allow for new optimization and performance-enhancement schemes for radiation detectors and imaging hardware.
We give an overview of data generation at photon sources, deep learning-based methods for image processing tasks, and hardware solutions for deep learning acceleration.
arXiv Detail & Related papers (2023-11-09T20:21:51Z)
- FCL-GAN: A Lightweight and Real-Time Baseline for Unsupervised Blind Image Deblurring [72.43250555622254]
We propose a lightweight and real-time unsupervised BID baseline, termed Frequency-domain Contrastive Loss Constrained Lightweight CycleGAN.
FCL-GAN has attractive properties, i.e., no image domain limitation, no image resolution limitation, 25x lighter than SOTA, and 5x faster than SOTA.
Experiments on several image datasets demonstrate the effectiveness of FCL-GAN in terms of performance, model size and inference time.
arXiv Detail & Related papers (2022-04-16T15:08:03Z)
- Real-Time GPU-Accelerated Machine Learning Based Multiuser Detection for 5G and Beyond [70.81551587109833]
Nonlinear beamforming filters can significantly outperform linear approaches in stationary scenarios with massive connectivity.
One of the main challenges comes from the real-time implementation of these algorithms.
This paper explores the acceleration of APSM-based algorithms through massive parallelization.
arXiv Detail & Related papers (2022-01-13T15:20:45Z)
- Dynamic Network-Assisted D2D-Aided Coded Distributed Learning [59.29409589861241]
We propose a novel device-to-device (D2D)-aided coded federated learning method (D2D-CFL) for load balancing across devices.
We derive an optimal compression rate for achieving minimum processing time and establish its connection with the convergence time.
Our proposed method is beneficial for real-time collaborative applications, where the users continuously generate training data.
arXiv Detail & Related papers (2021-11-26T18:44:59Z)
- Reservoir Based Edge Training on RF Data To Deliver Intelligent and Efficient IoT Spectrum Sensors [0.6451914896767135]
We propose a processing architecture that supports general machine learning algorithms on compact mobile devices.
Deep Delay Loop Reservoir Computing (DLR) delivers reductions in form factor, hardware complexity and latency compared to the state of the art (SoA).
We present DLR architectures composed of multiple smaller loops whose state vectors are linearly combined to create a lower dimensional input into Ridge regression.
arXiv Detail & Related papers (2021-04-01T20:08:01Z)
- Reservoir-Based Distributed Machine Learning for Edge Operation [0.6451914896767135]
We introduce a novel design for in-situ training of machine learning algorithms built into smart sensors.
We illustrate distributed training scenarios using radio frequency (RF) spectrum sensors.
arXiv Detail & Related papers (2021-04-01T20:06:40Z)
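One related entry above describes DLR architectures built from multiple smaller loops whose state vectors are linearly combined into a lower-dimensional input for Ridge regression. A minimal software sketch of that readout path follows; the loop internals are stubbed out (any per-loop state matrices would do), and the random-projection combiner and all names (`combine_loop_states`, `out_dim`, `lam`) are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def combine_loop_states(loop_states, out_dim, seed=0):
    """Linearly combine state vectors from several small loops (sketch).

    loop_states: list of (T, n_i) arrays, one per small delay loop.
    Returns a (T, out_dim) feature matrix via a random linear projection,
    a stand-in for whatever learned/fixed combiner the architecture uses.
    """
    rng = np.random.default_rng(seed)
    stacked = np.concatenate(loop_states, axis=1)  # (T, sum of n_i)
    P = rng.standard_normal((stacked.shape[1], out_dim)) / np.sqrt(stacked.shape[1])
    return stacked @ P

def ridge_fit(X, y, lam=1e-2):
    """Ridge-regression readout on the combined, lower-dimensional features."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
```

The point of the combination step is dimensionality: the Ridge solve scales with the combined feature width, so projecting several small loops down before regression keeps the readout cheap on edge hardware.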
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.