A comparison between Recurrent Neural Networks and classical machine
learning approaches in laser-induced breakdown spectroscopy
- URL: http://arxiv.org/abs/2304.08500v1
- Date: Sun, 16 Apr 2023 08:26:11 GMT
- Title: A comparison between Recurrent Neural Networks and classical machine
learning approaches in laser-induced breakdown spectroscopy
- Authors: Fatemeh Rezaei, Pouriya Khaliliyan, Mohsen Rezaei, Parvin Karimi,
Behnam Ashrafkhani
- Abstract summary: Recurrent Neural Networks are a class of Artificial Neural Networks in which connections between nodes form a graph for temporal analysis.
The laser-induced breakdown spectroscopy (LIBS) technique is used for quantitative analysis of aluminum alloys with different Recurrent Neural Network architectures.
- Score: 0.8399688944263843
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recurrent Neural Networks are a class of Artificial Neural
Networks in which the connections between nodes form a directed or
undirected graph, enabling the analysis of temporal dynamics. In this
research, the laser-induced breakdown spectroscopy (LIBS) technique is used
for quantitative analysis of aluminum alloys with different Recurrent
Neural Network (RNN) architectures. The fundamental harmonic (1064 nm) of a
nanosecond Nd:YAG laser pulse is employed to generate the LIBS plasma for
the prediction of constituent concentrations in the aluminum standard
samples. Recurrent architectures based on the Long Short-Term Memory
(LSTM), the Gated Recurrent Unit (GRU), and the Simple Recurrent Neural
Network (Simple RNN), as well as recurrent convolutional networks
comprising Conv-SimpleRNN, Conv-LSTM, and Conv-GRU, are used for
concentration prediction (a minimal sketch follows below). The predictions
are then compared with those of classical machine learning methods: support
vector regression (SVR), the multilayer perceptron (MLP), the decision tree
algorithm, gradient boosting regression (GBR), random forest regression
(RFR), linear regression, and the k-nearest neighbor (KNN) algorithm. The
results show that models based on convolutional recurrent networks achieved
the best prediction performance for most of the elements among the
multivariate methods considered.
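As a concrete illustration of the recurrent-convolutional approach, the sketch below builds a Conv-GRU regressor for LIBS spectra in Keras. This is not the authors' code: the spectrum length, layer sizes, and dummy training data are assumptions made for the example.

```python
# Minimal sketch of a Conv-GRU concentration regressor for LIBS spectra.
# All shapes and sizes below are assumptions made for the example.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_points = 1024   # assumed number of spectral channels per spectrum
n_elements = 5    # assumed number of element concentrations to predict

model = keras.Sequential([
    keras.Input(shape=(n_points, 1)),
    # Convolutional front end extracts local emission-line features.
    layers.Conv1D(32, kernel_size=7, activation="relu"),
    layers.MaxPooling1D(pool_size=4),
    # Recurrent layer aggregates the features along the wavelength axis.
    layers.GRU(64),
    layers.Dense(32, activation="relu"),
    layers.Dense(n_elements, activation="linear"),  # predicted concentrations
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# Dummy arrays standing in for (spectra, reference concentrations).
X = np.random.rand(16, n_points, 1).astype("float32")
y = np.random.rand(16, n_elements).astype("float32")
model.fit(X, y, epochs=2, batch_size=8, verbose=0)
```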
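The classical baselines named in the abstract map directly onto scikit-learn estimators; a minimal, hypothetical comparison loop might look as follows, with dummy arrays standing in for the spectra and reference concentrations.

```python
# Minimal sketch of the classical baselines; dummy data stands in for the
# flattened LIBS spectra and a single element's reference concentrations.
import numpy as np
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score

X = np.random.rand(40, 200)   # samples x spectral channels (dummy)
y = np.random.rand(40)        # concentrations of one element (dummy)

models = {
    "SVR": SVR(),
    "MLP": MLPRegressor(max_iter=2000),
    "DecisionTree": DecisionTreeRegressor(),
    "GBR": GradientBoostingRegressor(),
    "RFR": RandomForestRegressor(),
    "Linear": LinearRegression(),
    "KNN": KNeighborsRegressor(),
}
for name, reg in models.items():
    # scikit-learn reports error-based scores as negative values.
    mse = -cross_val_score(reg, X, y, cv=5,
                           scoring="neg_mean_squared_error").mean()
    print(f"{name}: MSE = {mse:.4f}")
```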
Related papers
- Parallel-in-Time Solutions with Random Projection Neural Networks [0.07282584715927627]
This paper considers one of the fundamental parallel-in-time methods for the solution of ordinary differential equations, Parareal, and extends it by adopting a neural network as a coarse propagator.
We provide a theoretical analysis of the convergence properties of the proposed algorithm and show its effectiveness for several examples, including Lorenz and Burgers' equations.
arXiv Detail & Related papers (2024-08-19T07:32:41Z)
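For readers unfamiliar with Parareal, the following is a minimal sketch of the iteration the paper builds on. The trained network coarse propagator is stood in for by a single explicit-Euler step, and the test ODE and step counts are illustrative assumptions.

```python
# Sketch of the Parareal correction U[i+1] = G(U[i]) + F_old[i] - G_old[i].
import numpy as np

def fine(u, t0, t1, f, steps=100):
    # Accurate (expensive) propagator: many small explicit-Euler steps.
    dt = (t1 - t0) / steps
    for i in range(steps):
        u = u + dt * f(t0 + i * dt, u)
    return u

def coarse(u, t0, t1, f):
    # Cheap propagator; the paper replaces this with a trained network.
    return u + (t1 - t0) * f(t0, u)

def parareal(u0, ts, f, iters=3):
    n = len(ts) - 1
    U = [u0]
    for i in range(n):                       # initial coarse sweep
        U.append(coarse(U[i], ts[i], ts[i + 1], f))
    for _ in range(iters):
        # Fine solves use the previous iterate and can run in parallel.
        F = [fine(U[i], ts[i], ts[i + 1], f) for i in range(n)]
        G = [coarse(U[i], ts[i], ts[i + 1], f) for i in range(n)]
        for i in range(n):                   # sequential correction sweep
            U[i + 1] = coarse(U[i], ts[i], ts[i + 1], f) + F[i] - G[i]
    return U

# Example: logistic ODE u' = u * (1 - u) on eight coarse intervals.
ts = np.linspace(0.0, 4.0, 9)
print(parareal(0.1, ts, lambda t, u: u * (1 - u))[-1])
```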
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Properties and Potential Applications of Random Functional-Linked Types of Neural Networks [81.56822938033119]
Random functional-linked neural networks (RFLNNs) offer an alternative way of learning in deep structures.
This paper gives some insights into the properties of RFLNNs from the viewpoint of the frequency domain.
We propose a method to generate a BLS network with better performance, and design an efficient algorithm for solving Poisson's equation.
arXiv Detail & Related papers (2023-04-03T13:25:22Z)
- Deep Convolutional Learning-Aided Detector for Generalized Frequency Division Multiplexing with Index Modulation [0.0]
The proposed method first pre-processes the received signal by using a zero-forcing (ZF) detector and then applies a neural network consisting of a convolutional neural network (CNN) followed by a fully-connected neural network (FCNN).
The FCNN part uses only two fully-connected layers, which can be adapted to yield a trade-off between complexity and bit error rate (BER) performance.
It has been demonstrated that the proposed deep convolutional neural network-based detection and demodulation scheme provides better BER performance than the ZF detector with a reasonable complexity increase.
arXiv Detail & Related papers (2022-02-06T22:18:42Z)
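A hypothetical sketch of the detector structure described above, a small CNN followed by a two-layer fully-connected network; all sizes are placeholders, and the zero-forcing pre-processing stage is omitted.

```python
# Sketch of the detector head: a small CNN followed by two dense layers.
# Subcarrier count, bit width, and layer sizes are placeholder assumptions.
from tensorflow import keras
from tensorflow.keras import layers

n_sub, n_bits = 128, 256   # assumed subcarriers and bits per block

detector = keras.Sequential([
    keras.Input(shape=(n_sub, 2)),               # real/imag parts after ZF
    layers.Conv1D(16, kernel_size=5, activation="relu"),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),        # FCNN layer 1
    layers.Dense(n_bits, activation="sigmoid"),  # FCNN layer 2: bit estimates
])
detector.compile(optimizer="adam", loss="binary_crossentropy")
```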
- Low-bit Quantization of Recurrent Neural Network Language Models Using Alternating Direction Methods of Multipliers [67.688697838109]
This paper presents a novel method to train quantized RNNLMs from scratch using the alternating direction method of multipliers (ADMM).
Experiments on two tasks suggest the proposed ADMM quantization achieved a model size compression factor of up to 31 times over the full-precision baseline RNNLMs.
arXiv Detail & Related papers (2021-11-29T09:30:06Z)
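To make the ADMM quantization idea concrete, here is a sketch on a toy least-squares objective rather than an RNN language model; the codebook, penalty weight, and step size are assumptions.

```python
# ADMM-style low-bit quantization on a toy least-squares problem;
# in the paper, the loss is an RNN language-model objective instead.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 10))
b = rng.normal(size=50)
w = np.zeros(10)      # full-precision weights
q = np.zeros(10)      # quantized copy
u = np.zeros(10)      # scaled dual variable
rho, lr = 1.0, 0.005
levels = np.array([-0.5, 0.0, 0.5])   # assumed low-bit codebook

def project(v):
    # Nearest-level projection onto the quantization codebook.
    return levels[np.argmin(np.abs(v[:, None] - levels[None, :]), axis=1)]

for _ in range(300):
    # w-update: gradient step on the loss plus the augmented-Lagrangian term.
    grad = A.T @ (A @ w - b) + rho * (w - q + u)
    w -= lr * grad
    q = project(w + u)   # q-update: projection onto the quantized set
    u += w - q           # dual update
print("quantized weights:", q)
```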
- Deep Neural Networks using a Single Neuron: Folded-in-Time Architecture using Feedback-Modulated Delay Loops [0.0]
We present a method for folding a deep neural network of arbitrary size into a single neuron with multiple time-delayed feedback loops.
This single-neuron deep neural network comprises only a single nonlinearity and appropriately adjusted modulations of the feedback signals.
The new method, which we call Folded-in-time DNN (Fit-DNN), exhibits promising performance in a set of benchmark tasks.
arXiv Detail & Related papers (2020-11-19T21:45:58Z)
- Connecting Weighted Automata, Tensor Networks and Recurrent Neural Networks through Spectral Learning [58.14930566993063]
We present connections between three models used in different research fields: weighted finite automata (WFA) from formal languages and linguistics, recurrent neural networks used in machine learning, and tensor networks.
We introduce the first provable learning algorithm for linear 2-RNNs defined over sequences of continuous input vectors.
arXiv Detail & Related papers (2020-10-19T15:28:00Z)
- Coupled Oscillatory Recurrent Neural Network (coRNN): An accurate and (gradient) stable architecture for learning long time dependencies [15.2292571922932]
We propose a novel architecture for recurrent neural networks.
Our proposed RNN is based on a time-discretization of a system of second-order ordinary differential equations.
Experiments show that the proposed RNN is comparable in performance to the state of the art on a variety of benchmarks.
arXiv Detail & Related papers (2020-10-02T12:35:04Z)
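The coRNN update can be written down in a few lines. The sketch below follows the second-order-ODE discretization described in the summary, with random placeholder weights and hyperparameters chosen only for illustration.

```python
# Sketch of the coRNN update: a time discretization of the second-order ODE
# y'' = tanh(W y + W2 y' + V u + b) - gamma * y - eps * y'.
# Sizes, weights, and hyperparameters are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h = 3, 8
dt, gamma, eps = 0.1, 1.0, 0.5
W = rng.normal(size=(d_h, d_h))
W2 = rng.normal(size=(d_h, d_h))
V = rng.normal(size=(d_h, d_in))
b = np.zeros(d_h)

def cornn_step(y, z, u):
    # z is the hidden "velocity", y the hidden state.
    z_new = z + dt * (np.tanh(W @ y + W2 @ z + V @ u + b)
                      - gamma * y - eps * z)
    y_new = y + dt * z_new
    return y_new, z_new

y = np.zeros(d_h)
z = np.zeros(d_h)
for u in rng.normal(size=(20, d_in)):   # run over a dummy input sequence
    y, z = cornn_step(y, z, u)
print(y)
```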
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- The Power of Linear Recurrent Neural Networks [1.124958340749622]
We show how autoregressive linear, i.e., linearly activated, recurrent neural networks (LRNNs) can approximate any time-dependent function f(t).
LRNNs outperform the previous state-of-the-art for the MSO task with a minimal number of units.
arXiv Detail & Related papers (2018-02-09T15:35:41Z)
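To illustrate the linear-RNN idea, the sketch below fits a linear transition map to a delay-embedded toy signal by least squares and then free-runs it; the embedding dimension and test signal are assumptions, not the paper's setup.

```python
# Autoregressive linear model: learn a linear state-transition map by least
# squares on delay-embedded states, then iterate it to extrapolate the signal.
import numpy as np

t = np.linspace(0.0, 20.0, 400)
f = np.sin(t) + np.sin(1.9 * t)   # toy multiple-superimposed-oscillator signal
d = 8                             # assumed delay-embedding dimension

# State vectors x_k = (f_k, ..., f_{k+d-1}) and their successors.
X = np.stack([f[i:i + d] for i in range(len(f) - d + 1)])
W, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)   # linear transition map

x = X[-1]
preds = []
for _ in range(100):              # free-running prediction
    x = x @ W
    preds.append(x[-1])
print(np.round(preds[:5], 3))
```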