Development of Use-specific High Performance Cyber-Nanomaterial Optical
Detectors by Effective Choice of Machine Learning Algorithms
- URL: http://arxiv.org/abs/1912.11751v3
- Date: Fri, 3 Jan 2020 18:50:12 GMT
- Title: Development of Use-specific High Performance Cyber-Nanomaterial Optical
Detectors by Effective Choice of Machine Learning Algorithms
- Authors: Davoud Hejazi, Shuangjun Liu, Amirreza Farnoosh, Sarah Ostadabbas, and
Swastik Kar
- Abstract summary: We show that the best choice of ML algorithm in a cyber-nanomaterial detector is determined mainly by use-specific considerations.
By tracking and modeling the long-term drift of detector performance over a large (one-year) period, we show it is possible to improve predictive accuracy with no need for recalibration.
- Score: 14.569246848322983
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to their inherent variabilities, nanomaterial-based sensors are
challenging to translate into real-world applications, where reliability and
reproducibility are key. Recently we showed that Bayesian inference can be
employed on engineered variability in layered nanomaterial-based optical
transmission filters to determine optical wavelengths with high accuracy and
precision. In many practical applications, however, the sensing cost, speed, and
long-term reliability can be equally or more important considerations. Though
various machine learning tools are frequently used on sensor/detector networks
to address these concerns, their effectiveness on nanomaterial-based sensors
has not been explored. Here we show that the best choice of ML algorithm in a
cyber-nanomaterial detector is determined mainly by use-specific
considerations, e.g., accuracy, computational cost, speed, and resilience
against drift/ageing effects. When sufficient data and computing resources are
provided, the highest sensing accuracy can be achieved by the kNN and Bayesian
inference algorithms, but these can be computationally expensive for real-time
applications. In contrast, artificial neural networks are computationally
expensive to train, but provide the fastest results under testing conditions and
remain reasonably accurate. When data is limited, SVMs perform well even with
small training sets, while the other algorithms show a considerable reduction in
accuracy when data is scarce, hence setting a lower limit on the size of the
required training data. We show that by tracking and modeling the long-term
drifts of the detector performance over a large (one-year) period, it is
possible to improve the predictive accuracy with no need for recalibration. Our
research shows for the first time that if the ML algorithm is chosen specific
to the use case, low-cost solution-processed cyber-nanomaterial detectors can be
practically implemented under diverse operational requirements, despite their
inherent variabilities.
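To make the accuracy-versus-cost trade-off concrete, here is a minimal sketch of the kNN approach the abstract names: a nearest-neighbour wavelength estimator built from calibration readings. All numbers, the two-filter feature vectors, and the `knn_predict` helper are hypothetical illustrations, not the authors' actual detector data or implementation.

```python
# Hypothetical sketch of kNN wavelength estimation for an optical detector.
# Features stand in for normalized transmission readings from two filters;
# labels are incident wavelengths in nm. All values are made up.
import math

def knn_predict(train, query, k=3):
    """Predict a wavelength by averaging the labels of the k nearest
    calibration points (Euclidean distance in feature space)."""
    dists = sorted((math.dist(features, query), wl) for features, wl in train)
    nearest = dists[:k]
    return sum(wl for _, wl in nearest) / k

# Hypothetical calibration set: (filter responses, wavelength in nm).
calibration = [
    ((0.10, 0.80), 450.0),
    ((0.12, 0.78), 460.0),
    ((0.30, 0.55), 520.0),
    ((0.32, 0.53), 530.0),
    ((0.70, 0.20), 630.0),
    ((0.72, 0.18), 640.0),
]

print(knn_predict(calibration, (0.31, 0.54), k=2))  # → 525.0
```

Note the cost profile the abstract describes: prediction scans the entire calibration set on every query, which is why kNN (like Bayesian inference over the full dataset) becomes expensive for real-time use as the calibration set grows, whereas a trained neural network pays that cost once, up front.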
Related papers
- Fast Exploration of the Impact of Precision Reduction on Spiking Neural
Networks [63.614519238823206]
Spiking Neural Networks (SNNs) are a practical choice when the target hardware operates at the edge of computing.
We employ an Interval Arithmetic (IA) model to develop an exploration methodology that takes advantage of the capability of such a model to propagate the approximation error.
arXiv Detail & Related papers (2022-11-22T15:08:05Z) - A Robust and Explainable Data-Driven Anomaly Detection Approach For
Power Electronics [56.86150790999639]
We present two anomaly detection and classification approaches, namely the Matrix Profile algorithm and anomaly transformer.
The Matrix Profile algorithm is shown to be well suited as a generalizable approach for detecting real-time anomalies in streaming time-series data.
A series of custom filters is created and added to the detector to tune its sensitivity, recall, and detection accuracy.
arXiv Detail & Related papers (2022-09-23T06:09:35Z) - To Compute or not to Compute? Adaptive Smart Sensing in
Resource-Constrained Edge Computing [1.7361161778148904]
We consider a network of smart sensors for an edge computing application that sample a time-varying signal and send updates to a base station for remote global monitoring.
Sensors are equipped with sensing and compute, and can either send raw data or process them on-board before transmission.
We propose an estimation-theoretic optimization framework that embeds both computation and communication latency.
arXiv Detail & Related papers (2022-09-05T23:46:42Z) - Braille Letter Reading: A Benchmark for Spatio-Temporal Pattern
Recognition on Neuromorphic Hardware [50.380319968947035]
Recent deep learning approaches have reached high accuracy in such tasks, but their implementation on conventional embedded solutions is still computationally and energetically very expensive.
We propose a new benchmark for computing tactile pattern recognition at the edge through letter reading.
We trained and compared feed-forward and recurrent spiking neural networks (SNNs) offline using back-propagation through time with surrogate gradients, then deployed them on the Intel Loihi neuromorphic chip for efficient inference.
Our results show that the LSTM outperforms the recurrent SNN in terms of accuracy by 14%. However, the recurrent SNN on Loihi is 237 times more energy efficient.
arXiv Detail & Related papers (2022-05-30T14:30:45Z) - Real-Time GPU-Accelerated Machine Learning Based Multiuser Detection for
5G and Beyond [70.81551587109833]
Nonlinear beamforming filters can significantly outperform linear approaches in stationary scenarios with massive connectivity.
One of the main challenges comes from the real-time implementation of these algorithms.
This paper explores the acceleration of APSM-based algorithms through massive parallelization.
arXiv Detail & Related papers (2022-01-13T15:20:45Z) - On The Reliability Of Machine Learning Applications In Manufacturing
Environments [7.467244761351822]
Continuous online monitoring of machine learning performance is required to build reliable systems.
Concept and sensor drift can lead to degrading accuracy of the algorithm over time.
We assess the robustness of ML algorithms commonly used in manufacturing and show that the accuracy strongly declines with increasing drift for all tested algorithms.
arXiv Detail & Related papers (2021-12-13T19:41:26Z) - Adaptive Anomaly Detection for Internet of Things in Hierarchical Edge
Computing: A Contextual-Bandit Approach [81.5261621619557]
We propose an adaptive anomaly detection scheme with hierarchical edge computing (HEC).
We first construct multiple anomaly detection DNN models with increasing complexity, and associate each of them to a corresponding HEC layer.
Then, we design an adaptive model selection scheme that is formulated as a contextual-bandit problem and solved by using a reinforcement learning policy network.
arXiv Detail & Related papers (2021-08-09T08:45:47Z) - A comparative study of neural network techniques for automatic software
vulnerability detection [9.443081849443184]
The most commonly used method for detecting software vulnerabilities is static analysis.
Some researchers have proposed to use neural networks that have the ability of automatic feature extraction to improve intelligence of detection.
We have conducted extensive experiments to test the performance of the two most typical neural networks.
arXiv Detail & Related papers (2021-04-29T01:47:30Z) - Ps and Qs: Quantization-aware pruning for efficient low latency neural
network inference [56.24109486973292]
We study the interplay between pruning and quantization during the training of neural networks for ultra low latency applications.
We find that quantization-aware pruning yields more computationally efficient models than either pruning or quantization alone for our task.
arXiv Detail & Related papers (2021-02-22T19:00:05Z) - Neural Networks Versus Conventional Filters for Inertial-Sensor-based
Attitude Estimation [1.0957528713294873]
Inertial measurement units are commonly used to estimate the attitude of moving objects.
Nonlinear filter approaches have been proposed for solving the inherent sensor fusion problem.
We investigate to what extent these limitations can be overcome by means of artificial neural networks.
arXiv Detail & Related papers (2020-05-14T11:59:19Z) - Real-time Out-of-distribution Detection in Learning-Enabled
Cyber-Physical Systems [1.4213973379473654]
Cyber-physical systems benefit by using machine learning components that can handle the uncertainty and variability of the real-world.
Deep neural networks, however, introduce new types of hazards that may impact system safety.
Out-of-distribution data may lead to a large error and compromise safety.
arXiv Detail & Related papers (2020-01-28T17:51:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.