Assessing the Performance of 1D-Convolution Neural Networks to Predict
Concentration of Mixture Components from Raman Spectra
- URL: http://arxiv.org/abs/2306.16621v1
- Date: Thu, 29 Jun 2023 01:41:07 GMT
- Title: Assessing the Performance of 1D-Convolution Neural Networks to Predict
Concentration of Mixture Components from Raman Spectra
- Authors: Dexter Antonio, Hannah O'Toole, Randy Carney, Ambarish Kulkarni, Ahmet
Palazoglu
- Abstract summary: An emerging application of Raman spectroscopy is monitoring the state of chemical reactors during biologic drug production.
Chemometric algorithms are used to interpret Raman spectra produced from complex mixtures of bioreactor contents as a reaction evolves.
Finding the optimal algorithm for a specific bioreactor environment is challenging due to the lack of freely available Raman mixture datasets.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: An emerging application of Raman spectroscopy is monitoring the state of
chemical reactors during biologic drug production. Raman shift intensities
scale linearly with the concentrations of chemical species and thus can be used
to analytically determine real-time concentrations using non-destructive light
irradiation in a label-free manner. Chemometric algorithms are used to
interpret Raman spectra produced from complex mixtures of bioreactor contents
as a reaction evolves. Finding the optimal algorithm for a specific bioreactor
environment is challenging due to the lack of freely available Raman mixture
datasets. The RaMix Python package addresses this challenge by enabling the
generation of synthetic Raman mixture datasets with controllable noise levels
to assess the utility of different chemometric algorithm types for real-time
monitoring applications. To demonstrate the capabilities of this package and
compare the performance of different chemometric algorithms, 48 datasets of
simulated spectra were generated using the RaMix Python package. The four
tested algorithms include partial least squares regression (PLS), a simple
neural network, a simple convolutional neural network (simple CNN), and a 1D
convolutional neural network with a ResNet architecture (ResNet). The
performance of the PLS and simple CNN models was found to be comparable, with
the PLS algorithm slightly outperforming the other models on 83% of the
datasets. The simple CNN model outperformed the other models on large,
high-noise datasets, demonstrating the superior capability of convolutional
neural networks compared to PLS in analyzing noisy spectra. These results demonstrate
the promise of CNNs to automatically extract concentration information from
unprocessed, noisy spectra, allowing for better process control of industrial
drug production. Code for this project is available at
github.com/DexterAntonio/RaMix.
Related papers
- Hyperspectral Image Classification Based on Faster Residual Multi-branch Spiking Neural Network [6.166929138912052]
This paper builds a spiking neural network (SNN) based on the leaky integrate-and-fire (LIF) neuron model for HSI classification tasks.
At the same accuracy, SNN-SWMR reduces the required time steps by about 84% and cuts training and testing time by about 63% and 70%, respectively.
arXiv Detail & Related papers (2024-09-18T00:51:01Z)
- Data Augmentation Scheme for Raman Spectra with Highly Correlated Annotations [0.23090185577016453]
We exploit the additive nature of spectra in order to generate additional data points from a given dataset that have statistically independent labels.
We show that training a CNN on these generated data points improves the performance on datasets where the annotations do not bear the same correlation as the dataset that was used for model training.
arXiv Detail & Related papers (2024-02-01T18:46:28Z)
- Heterogenous Memory Augmented Neural Networks [84.29338268789684]
We introduce a novel heterogeneous memory augmentation approach for neural networks.
By introducing learnable memory tokens with attention mechanism, we can effectively boost performance without huge computational overhead.
We show our approach on various image and graph-based tasks under both in-distribution (ID) and out-of-distribution (OOD) conditions.
arXiv Detail & Related papers (2023-10-17T01:05:28Z)
- Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
- Signal Processing for Implicit Neural Representations [80.38097216996164]
Implicit Neural Representations (INRs) encode continuous multi-media data via multi-layer perceptrons.
Existing works manipulate such continuous representations via processing on their discretized instance.
We propose an implicit neural signal processing network, dubbed INSP-Net, via differential operators on INR.
arXiv Detail & Related papers (2022-10-17T06:29:07Z)
- Parameter estimation for WMTI-Watson model of white matter using encoder-decoder recurrent neural network [0.0]
In this study, we evaluate the performance of NLLS, the RNN-based method, and a multilayer perceptron (MLP) on rat and human brain datasets.
We showed that the proposed RNN-based fitting approach had the advantage of highly reduced computation time over NLLS.
arXiv Detail & Related papers (2022-03-01T16:33:15Z)
- Simpler is better: spectral regularization and up-sampling techniques for variational autoencoders [1.2234742322758418]
Characterization of the spectral behavior of generative models based on neural networks remains an open issue.
Recent research has focused heavily on generative adversarial networks and the high-frequency discrepancies between real and generated images.
We propose a simple 2D Fourier transform-based spectral regularization loss for Variational Autoencoders (VAEs).
arXiv Detail & Related papers (2022-01-19T11:49:57Z)
- Classification of diffraction patterns using a convolutional neural network in single particle imaging experiments performed at X-ray free-electron lasers [53.65540150901678]
Single particle imaging (SPI) at X-ray free electron lasers (XFELs) is particularly well suited to determine the 3D structure of particles in their native environment.
For a successful reconstruction, diffraction patterns originating from a single hit must be isolated from a large number of acquired patterns.
We propose to formulate this task as an image classification problem and solve it using convolutional neural network (CNN) architectures.
arXiv Detail & Related papers (2021-12-16T17:03:14Z)
- Spectral Complexity-scaled Generalization Bound of Complex-valued Neural Networks [78.64167379726163]
This paper is the first work that proves a generalization bound for the complex-valued neural network.
We conduct experiments by training complex-valued convolutional neural networks on different datasets.
arXiv Detail & Related papers (2021-12-07T03:25:25Z)
- NeRF in detail: Learning to sample for view synthesis [104.75126790300735]
Neural radiance fields (NeRF) methods have demonstrated impressive novel view synthesis.
In this work we address a clear limitation of the vanilla coarse-to-fine approach -- that it is based on a heuristic and not trained end-to-end for the task at hand.
We introduce a differentiable module that learns to propose samples and their importance for the fine network, and consider and compare multiple alternatives for its neural architecture.
arXiv Detail & Related papers (2021-06-09T17:59:10Z)
- Identification of complex mixtures for Raman spectroscopy using a novel scheme based on a new multi-label deep neural network [0.0]
We propose a new scheme based on a continuous wavelet transform (CWT) and a deep network for classifying complex mixtures.
A multi-label deep neural network model (MDNN) is then applied to classify the materials.
The average detection time obtained from our model is 5.31 s, which is much faster than the detection time of the previously proposed models.
arXiv Detail & Related papers (2020-10-29T14:58:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.