Broad Learning System with Takagi-Sugeno Fuzzy Subsystem for Tobacco Origin Identification based on Near Infrared Spectroscopy
- URL: http://arxiv.org/abs/2301.00126v1
- Date: Sat, 31 Dec 2022 05:38:37 GMT
- Title: Broad Learning System with Takagi-Sugeno Fuzzy Subsystem for Tobacco Origin Identification based on Near Infrared Spectroscopy
- Authors: Di Wang, Simon X. Yang
- Abstract summary: A novel broad learning system with a Takagi-Sugeno (TS) fuzzy subsystem is proposed for rapid identification of tobacco origin.
The proposed method achieves the highest prediction accuracy (95.59%) in comparison to traditional classification algorithms, an artificial neural network, and a deep convolutional neural network.
- Score: 10.807954952981301
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Tobacco origin identification is of great importance in the tobacco industry.
Modeling and analysis of near-infrared spectroscopy sensor data has become a popular
method for rapid detection of internal features. However, when such sensor data are
analyzed with traditional artificial neural networks or deep network models, the
training process is extremely time-consuming. In this paper, a novel broad learning
system with a Takagi-Sugeno (TS) fuzzy subsystem is proposed for rapid identification
of tobacco origin. The proposed method employs incremental learning, which obtains the
weight matrix of the network with only a very small amount of computation, resulting
in a much shorter training time, with the incremental training step taking only about
3 seconds. The experimental results show that the TS fuzzy subsystem can extract
features from the near-infrared data and effectively improve recognition performance.
The proposed method achieves the highest prediction accuracy (95.59%) in comparison to
traditional classification algorithms, an artificial neural network, and a deep
convolutional neural network, and has a great advantage in training time, requiring
only about 128 seconds in total.
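The 3-second figure comes from the incremental step: when new nodes are appended to the broad layer, the pseudoinverse of the feature matrix is extended in closed form instead of being recomputed, and the output weights follow from one multiplication. Below is a minimal NumPy sketch of such a Greville-style block update for a broad-learning readout; the toy data, tanh enhancement mapping, and node counts are illustrative assumptions, and the paper's TS fuzzy feature layer is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for NIR spectra: n samples, d wavelengths, c origin classes.
n, d, c = 200, 64, 4
X = rng.normal(size=(n, d))
Y = np.eye(c)[rng.integers(0, c, size=n)]   # one-hot origin labels

def enhancement(X, k, seed):
    """Random enhancement nodes: tanh of a random projection."""
    r = np.random.default_rng(seed)
    return np.tanh(X @ r.normal(size=(X.shape[1], k)))

# Initial broad layer: input features plus one block of enhancement nodes.
A = np.hstack([X, enhancement(X, 32, seed=1)])
A_pinv = np.linalg.pinv(A)        # computed once, up front
W_out = A_pinv @ Y                # output weights

# Incremental step: append new nodes WITHOUT refitting from scratch.
H = enhancement(X, 16, seed=2)    # new feature columns
D = A_pinv @ H
C = H - A @ D
if np.linalg.norm(C) > 1e-10:     # Greville-style block pseudoinverse update
    B = np.linalg.pinv(C)
else:
    B = np.linalg.inv(np.eye(D.shape[1]) + D.T @ D) @ D.T @ A_pinv
A_pinv = np.vstack([A_pinv - D @ B, B])
A = np.hstack([A, H])
W_out = A_pinv @ Y                # cheap refresh of the output weights

pred = (A @ W_out).argmax(axis=1)
print("train accuracy:", (pred == Y.argmax(axis=1)).mean())
```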
Related papers
- Learning Rate Optimization for Deep Neural Networks Using Lipschitz Bandits [9.361762652324968]
A properly tuned learning rate leads to faster training and higher test accuracy.
We propose a Lipschitz bandit-driven approach for tuning the learning rate of neural networks.
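As a rough illustration of bandit-driven learning-rate search, the sketch below runs a generic UCB1 policy over a log-spaced grid of candidate rates; the reward function is a hypothetical stand-in for a short training run, and the paper's exploitation of the Lipschitz structure of the reward is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Candidate learning rates on a log grid: the arms of the bandit.
arms = np.logspace(-4, -1, num=8)

def validation_score(lr):
    """Hypothetical noisy reward: validation accuracy after a short run.
    Stand-in for actually training a model with this learning rate."""
    return 0.9 - 0.04 * (np.log10(lr) + 2.5) ** 2 + rng.normal(0, 0.01)

counts = np.zeros(len(arms))
means = np.zeros(len(arms))

for t in range(1, 101):
    if t <= len(arms):                     # play each arm once first
        a = t - 1
    else:                                  # then pick the highest upper bound
        ucb = means + np.sqrt(2 * np.log(t) / counts)
        a = int(ucb.argmax())
    r = validation_score(arms[a])
    counts[a] += 1
    means[a] += (r - means[a]) / counts[a] # running-mean update

print(f"selected learning rate: {arms[means.argmax()]:.1e}")
```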
arXiv Detail & Related papers (2024-09-15T16:21:55Z)
- The Cascaded Forward Algorithm for Neural Network Training [61.06444586991505]
We propose a new learning framework for neural networks, the Cascaded Forward (CaFo) algorithm, which, like the Forward-Forward (FF) algorithm, does not rely on backpropagation for optimization.
Unlike FF, our framework directly outputs a label distribution at each cascaded block and does not require the generation of additional negative samples.
In our framework each block can be trained independently, so it can easily be deployed to parallel acceleration systems, as in the sketch below.
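A loose sketch of the cascaded idea, not the paper's CaFo architecture: each block applies a fixed random forward transform, a per-block linear head is fit to the labels in isolation (ridge least squares stands in for the per-block objective), and at inference the per-block label distributions are averaged.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, c = 300, 32, 5
X = rng.normal(size=(n, d))
Y = np.eye(c)[rng.integers(0, c, size=n)]        # one-hot labels

def fit_head(F, Y):
    """Per-block predictor, trained independently: ridge least squares."""
    return np.linalg.solve(F.T @ F + 1e-3 * np.eye(F.shape[1]), F.T @ Y)

feats, heads = X, []
for b in range(3):                               # three cascaded blocks
    W = rng.normal(size=(feats.shape[1], 64)) / np.sqrt(feats.shape[1])
    feats = np.maximum(feats @ W, 0.0)           # fixed forward transform
    heads.append((W, fit_head(feats, Y)))        # head fit in isolation

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Inference: average the label distributions emitted by every block.
feats, dist = X, 0.0
for W, H in heads:
    feats = np.maximum(feats @ W, 0.0)
    dist = dist + softmax(feats @ H) / len(heads)

print("train accuracy:", (dist.argmax(1) == Y.argmax(1)).mean())
```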
arXiv Detail & Related papers (2023-03-17T02:01:11Z)
- Deep Learning for Size and Microscope Feature Extraction and Classification in Oral Cancer: Enhanced Convolution Neural Network [30.343802446139186]
Overfitting has been the main reason deep learning technology has not been successfully applied to oral cancer image classification.
The proposed system consists of an Enhanced Convolutional Neural Network that uses an autoencoder technique to increase the efficiency of the feature extraction process.
arXiv Detail & Related papers (2022-08-06T08:26:45Z)
- Optimization-Based Separations for Neural Networks [57.875347246373956]
We show that gradient descent can efficiently learn ball indicator functions using a depth 2 neural network with two layers of sigmoidal activations.
This is the first optimization-based separation result where the approximation benefits of the stronger architecture provably manifest in practice.
arXiv Detail & Related papers (2021-12-04T18:07:47Z)
- Efficient training of lightweight neural networks using Online Self-Acquired Knowledge Distillation [51.66271681532262]
Online Self-Acquired Knowledge Distillation (OSAKD) is proposed, aiming to improve the performance of any deep neural model in an online manner.
We utilize the k-NN non-parametric density estimation technique to estimate the unknown probability distributions of the data samples in the output feature space.
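The k-NN density estimator mentioned here has a standard closed form, p̂(x) ≈ k / (n · V(r_k)), where r_k is the distance from x to its k-th nearest sample and V(r_k) is the volume of the d-dimensional ball with that radius. A minimal sketch, with the feature space and k chosen purely for illustration:

```python
import math
import numpy as np

def knn_density(points, queries, k=5):
    """k-NN density estimate: p(x) ~ k / (n * volume of the d-ball whose
    radius is the distance from x to its k-th nearest sample point)."""
    n, d = points.shape
    # Distances from each query to all sample points.
    dists = np.linalg.norm(queries[:, None, :] - points[None, :, :], axis=2)
    r_k = np.sort(dists, axis=1)[:, k - 1]          # k-th neighbour distance
    unit_ball = math.pi ** (d / 2) / math.gamma(d / 2 + 1)
    return k / (n * unit_ball * r_k ** d)

rng = np.random.default_rng(0)
samples = rng.normal(size=(1000, 2))                # 2-D standard Gaussian
queries = np.array([[0.0, 0.0], [3.0, 3.0]])
print(knn_density(samples, queries))                # high at the mode, low in the tail
```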
arXiv Detail & Related papers (2021-08-26T14:01:04Z)
- A Ternary Bi-Directional LSTM Classification for Brain Activation Pattern Recognition Using fNIRS [0.15229257192293197]
Functional near-infrared spectroscopy (fNIRS) is a non-invasive, low-cost method used to study the brain's blood flow pattern.
The proposed system uses a Bi-Directional LSTM based deep learning architecture for task classification.
arXiv Detail & Related papers (2021-01-14T22:21:15Z)
- A Deep Learning Based Ternary Task Classification System Using Gramian Angular Summation Field in fNIRS Neuroimaging Data [0.15229257192293197]
Functional near-infrared spectroscopy (fNIRS) is a non-invasive, economical method used to study the brain's blood flow pattern.
The proposed method converts the raw fNIRS time series data into an image using Gramian Angular Summation Field.
A Deep Convolutional Neural Network (CNN) based architecture is then used for task classification, including mental arithmetic, motor imagery, and idle state.
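The Gramian Angular Summation Field itself is a compact transform: rescale the series to [-1, 1], encode each value as a polar angle φ_i = arccos(x̃_i), and set G_ij = cos(φ_i + φ_j). A minimal sketch (the paper's channel handling and image sizing are not reproduced):

```python
import numpy as np

def gasf(series):
    """Gramian Angular Summation Field of a 1-D time series."""
    x = np.asarray(series, dtype=float)
    # Rescale to [-1, 1] so arccos is defined everywhere.
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1
    phi = np.arccos(np.clip(x, -1.0, 1.0))          # polar-angle encoding
    # cos(phi_i + phi_j) = x_i x_j - sqrt(1-x_i^2) sqrt(1-x_j^2)
    return np.cos(phi[:, None] + phi[None, :])

t = np.linspace(0, 4 * np.pi, 64)
image = gasf(np.sin(t))                             # 64x64 "image" for a CNN
print(image.shape, image.min(), image.max())
```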
arXiv Detail & Related papers (2021-01-14T22:09:35Z)
- Fast accuracy estimation of deep learning based multi-class musical source separation [79.10962538141445]
We propose a method to evaluate the separability of instruments in any dataset without training and tuning a neural network.
Based on the oracle principle with an ideal ratio mask, our approach is an excellent proxy for estimating the separation performance of state-of-the-art deep learning approaches.
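The oracle in question is the ideal ratio mask: given ground-truth source spectrograms, each time-frequency bin of the mixture is weighted by the source's share of the total energy in that bin. A minimal sketch on synthetic magnitude spectrograms; a real pipeline would obtain these from an STFT, and summing magnitudes to form the mixture is itself an approximation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical magnitude spectrograms for 3 sources (freq bins x frames).
sources = rng.random(size=(3, 257, 100))

def ideal_ratio_masks(mags, eps=1e-8):
    """IRM per source: its energy divided by the total energy in each bin."""
    power = mags ** 2
    return power / (power.sum(axis=0, keepdims=True) + eps)

masks = ideal_ratio_masks(sources)
mixture = sources.sum(axis=0)            # magnitude-domain mix (approximation)
estimates = masks * mixture              # oracle-separated magnitudes

print(masks.sum(axis=0).round(3).min())  # masks sum to ~1 in every bin
```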
arXiv Detail & Related papers (2020-10-19T13:05:08Z)
- Hyperspectral Images Classification Based on Multi-scale Residual Network [5.166817530813299]
Hyperspectral remote sensing images contain a large amount of redundant information, and their data structure is non-linear.
Deep convolutional neural networks achieve high accuracy, but when only a small amount of data is available for training, their classification accuracy drops sharply.
To address the low classification accuracy of existing algorithms on small samples of hyperspectral images, a multi-scale residual network is proposed.
arXiv Detail & Related papers (2020-04-26T13:46:52Z)
- Large Batch Training Does Not Need Warmup [111.07680619360528]
Training deep neural networks using a large batch size has shown promising results and benefits many real-world applications.
In this paper, we propose a novel Complete Layer-wise Adaptive Rate Scaling (CLARS) algorithm for large-batch training.
Based on our analysis, we bridge the gap and illustrate the theoretical insights for three popular large-batch training techniques.
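CLARS builds on layer-wise adaptive rate scaling: in the classic LARS rule, each layer's step is scaled by the ratio of its weight norm to its gradient norm. The sketch below shows that base rule only; CLARS's warmup-free corrections are detailed in the paper and are not reproduced here.

```python
import numpy as np

def lars_update(w, grad, base_lr=1.0, trust=1e-3, weight_decay=1e-4):
    """One LARS step: the layer-local learning rate is
    trust * ||w|| / (||grad|| + weight_decay * ||w||)."""
    w_norm = np.linalg.norm(w)
    g_norm = np.linalg.norm(grad)
    local_lr = trust * w_norm / (g_norm + weight_decay * w_norm + 1e-12)
    return w - base_lr * local_lr * (grad + weight_decay * w)

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 128))       # one layer's weights
grad = rng.normal(size=(256, 128))    # its gradient for this step
w = lars_update(w, grad)
print(np.linalg.norm(w))
```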
arXiv Detail & Related papers (2020-02-04T23:03:12Z)
- MSE-Optimal Neural Network Initialization via Layer Fusion [68.72356718879428]
Deep neural networks achieve state-of-the-art performance for a range of classification and inference tasks.
The use of gradient descent combined with the nonconvexity of the underlying optimization problem renders learning susceptible to initialization.
We propose fusing neighboring layers of deeper networks that are initialized with random variables.
arXiv Detail & Related papers (2020-01-28T18:25:15Z)