Essential Number of Principal Components and Nearly Training-Free Model
for Spectral Analysis
- URL: http://arxiv.org/abs/2212.14623v1
- Date: Fri, 30 Dec 2022 10:19:10 GMT
- Title: Essential Number of Principal Components and Nearly Training-Free Model
for Spectral Analysis
- Authors: Yifeng Bie and Shuai You and Xinrui Li and Xuekui Zhang and Tao Lu
- Abstract summary: We show that the number of functional or non-functional principal components required to retain the essential information is the same as the number of independent constituents in the mixture set.
Due to the mutual independence among different gas molecules, a near one-to-one projection from the principal components to the mixture constituents can be established.
- Score: 5.374884291606767
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Through a study of multi-gas mixture datasets, we show that in
multi-component spectral analysis, the number of functional or non-functional
principal components required to retain the essential information is the same
as the number of independent constituents in the mixture set. Due to the mutual
independence among different gas molecules, a near one-to-one projection from
the principal components to the mixture constituents can be established, leading
to a significant simplification of spectral quantification. Further, with
knowledge of the molar extinction coefficients of each constituent, a complete
principal component set can be extracted from the coefficients directly, and
few to no training samples are required for the learning model. Compared to
other approaches, the proposed methods provide fast and accurate spectral
quantification with only a small memory footprint.
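To make the mechanism concrete, here is a minimal, hypothetical NumPy sketch (not the authors' code): mixture absorbance spectra are simulated with the Beer-Lambert law from assumed molar extinction spectra, the singular values of the centred data show that only as many principal components as constituents carry signal, and the concentrations are then recovered by least squares against the extinction matrix, with no training spectra at all.

```python
# Minimal sketch (not the paper's implementation): Beer-Lambert mixing,
# a PCA rank check, and "training-free" quantification from extinction spectra.
import numpy as np

rng = np.random.default_rng(0)
n_gases, n_wavelengths, n_samples = 3, 200, 50

# Hypothetical molar extinction coefficient spectra, one row per gas
# (in practice taken from a spectral database or measured directly).
E = np.abs(rng.normal(size=(n_gases, n_wavelengths)))

# Random concentrations for each simulated mixture sample.
C = rng.uniform(0.1, 1.0, size=(n_samples, n_gases))

# Beer-Lambert law at unit path length: absorbance = concentrations x extinction.
A = C @ E + 1e-4 * rng.normal(size=(n_samples, n_wavelengths))  # small noise

# PCA via SVD of the mean-centred spectra: the number of dominant singular
# values matches the number of independent constituents (3 here).
s = np.linalg.svd(A - A.mean(axis=0), compute_uv=False)
print("leading singular values:", np.round(s[:6], 3))

# Nearly training-free quantification: with E known, the concentrations follow
# from ordinary least squares, so no labelled training spectra are required.
C_hat, *_ = np.linalg.lstsq(E.T, A.T, rcond=None)
print("max concentration error:", np.abs(C_hat.T - C).max())
```

In this toy setting only the first three singular values sit well above the noise floor, mirroring the claim that the essential number of components equals the number of independent constituents.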
Related papers
- Automated Mixture Analysis via Structural Evaluation [0.0]
In this paper, we combine machine-learning molecular embedding methods with a graph-based ranking system to determine the likelihood of a molecule being present in a mixture.
We demonstrate that the mixture components can be identified with extremely high accuracy (>97%) in an efficient manner.
arXiv Detail & Related papers (2024-08-28T14:32:24Z) - On the estimation of the number of components in multivariate functional principal component analysis [0.0]
We present extensive simulations investigating how to choose the number of principal components to retain.
We show empirically that the conventional approach of using a percentage of variance explained threshold for each univariate functional feature may be unreliable.
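For reference, the conventional rule being questioned can be sketched in a few lines of scikit-learn (a hedged illustration for ordinary PCA rather than the multivariate functional setting): keep components until a fixed explained-variance threshold, assumed here to be 95%, is reached.

```python
# The conventional explained-variance rule the paper questions, shown for
# ordinary PCA on toy data with an assumed 95% threshold.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 40)) @ rng.normal(size=(40, 40))  # correlated toy data

pca = PCA().fit(X)
cumvar = np.cumsum(pca.explained_variance_ratio_)
n_components = int(np.searchsorted(cumvar, 0.95)) + 1
print("components kept at the 95% threshold:", n_components)
```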
arXiv Detail & Related papers (2023-11-08T09:05:42Z) - Dynamic Latent Separation for Deep Learning [67.62190501599176]
A core problem in machine learning is to learn expressive latent variables for model prediction on complex data.
Here, we develop an approach that improves expressiveness, provides partial interpretation, and is not restricted to specific applications.
arXiv Detail & Related papers (2022-10-07T17:56:53Z) - Unsupervised Machine Learning for Exploratory Data Analysis of Exoplanet
Transmission Spectra [68.8204255655161]
We focus on unsupervised techniques for analyzing spectral data from transiting exoplanets.
We show that there is a high degree of correlation in the spectral data, which calls for appropriate low-dimensional representations.
We uncover interesting structures in the principal component basis, namely, well-defined branches corresponding to different chemical regimes.
arXiv Detail & Related papers (2022-01-07T22:26:33Z) - Gaussian Process Regression for Absorption Spectra Analysis of Molecular
Dimers [68.8204255655161]
We discuss an approach based on a machine learning technique, where the parameters for the numerical calculations are chosen using Gaussian Process Regression (GPR).
This approach not only converges quickly to an optimal parameter set, but also provides information about the complete parameter space.
We find that the GPR indeed gives reliable results that agree with direct calculations of these parameters using quantum chemical methods.
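A hedged scikit-learn sketch of the general workflow (not the paper's implementation): a Gaussian process surrogate is fitted to a handful of evaluated parameter points and then queried over the whole parameter range, yielding both a predicted optimum and an uncertainty estimate for the rest of the parameter space.

```python
# Gaussian Process Regression as a surrogate for an expensive calculation:
# fit on a few evaluated parameter points, then query the full parameter range.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_objective(x):
    # Stand-in for a costly spectral simulation at parameter value x.
    return np.sin(3 * x) + 0.5 * x

x_train = np.array([[0.1], [0.4], [0.9], [1.5], [2.0]])
y_train = expensive_objective(x_train).ravel()

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
gpr.fit(x_train, y_train)

# Predictive mean and uncertainty over the whole parameter space.
x_grid = np.linspace(0.0, 2.0, 200).reshape(-1, 1)
mean, std = gpr.predict(x_grid, return_std=True)
best = int(np.argmax(mean))
print("surrogate optimum near x =", x_grid[best, 0], "with std", std[best])
```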
arXiv Detail & Related papers (2021-12-14T17:46:45Z) - Boosting Independent Component Analysis [5.770800671793959]
We present a novel boosting-based algorithm for independent component analysis.
Our algorithm fills a gap in nonparametric independent component analysis by introducing boosting into maximum likelihood estimation.
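For context, a minimal FastICA example of the underlying separation task (the paper's boosting-based nonparametric estimator is not reproduced here):

```python
# Standard ICA separation of two mixed non-Gaussian signals with FastICA;
# the paper replaces this kind of estimator with a boosting-based
# nonparametric maximum likelihood approach (not shown).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)
n_samples = 2000
t = np.linspace(0, 8, n_samples)

# Two non-Gaussian sources mixed by a fixed 2x2 matrix.
S = np.c_[np.sign(np.sin(3 * t)), rng.laplace(size=n_samples)]
A = np.array([[1.0, 0.5], [0.4, 1.0]])
X = S @ A.T

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)     # recovered sources (up to order and scale)
print("estimated mixing matrix:\n", ica.mixing_)
```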
arXiv Detail & Related papers (2021-12-12T14:53:42Z) - Shared Independent Component Analysis for Multi-Subject Neuroimaging [107.29179765643042]
We introduce Shared Independent Component Analysis (ShICA) that models each view as a linear transform of shared independent components contaminated by additive Gaussian noise.
We show that this model is identifiable if the components are either non-Gaussian or have enough diversity in noise variances.
We provide empirical evidence on fMRI and MEG datasets that ShICA yields more accurate estimation of the components than alternatives.
arXiv Detail & Related papers (2021-10-26T08:54:41Z) - Probabilistic Simplex Component Analysis [66.30587591100566]
PRISM is a probabilistic simplex component analysis approach to identifying the vertices of a data-circumscribing simplex from data.
The problem has a rich variety of applications, the most notable being hyperspectral unmixing in remote sensing and non-negative matrix factorization in machine learning.
arXiv Detail & Related papers (2021-03-18T05:39:00Z) - Sequential optical response suppression for chemical mixture
characterization [0.0]
We introduce an approach based on quantum tracking control that allows for determining the relative concentrations of constituents in a quantum mixture.
We consider two very distinct model systems: mixtures of diatomic molecules in the gas phase, as well as solid-state materials composed of a mixture of components.
arXiv Detail & Related papers (2020-10-26T19:24:00Z) - Consistent Estimation of Identifiable Nonparametric Mixture Models from
Grouped Observations [84.81435917024983]
This work proposes an algorithm that consistently estimates any identifiable mixture model from grouped observations.
A practical implementation is provided for paired observations, and the approach is shown to outperform existing methods.
arXiv Detail & Related papers (2020-06-12T20:44:22Z) - Constrained Nonnegative Matrix Factorization for Blind Hyperspectral
Unmixing incorporating Endmember Independence [0.0]
This paper presents a novel blind hyperspectral unmixing (HU) algorithm, referred to as Kurtosis-based Smooth Nonnegative Matrix Factorization (KbSNMF).
It incorporates a novel constraint based on the statistical independence of the probability density functions of endmember spectra.
It exhibits superior performance especially in terms of extracting endmember spectra from hyperspectral data.
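A minimal sketch of plain NMF-based unmixing on synthetic data (the kurtosis-based independence constraint that defines KbSNMF is not included here):

```python
# Plain NMF unmixing of a synthetic hyperspectral matrix into non-negative
# abundances and endmember spectra; KbSNMF adds an independence constraint
# on the endmembers that this basic factorisation lacks.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(2)
n_pixels, n_bands, n_endmembers = 500, 100, 4

# Synthetic data: non-negative abundances times non-negative endmember spectra.
true_endmembers = rng.random((n_endmembers, n_bands))
abundances = rng.dirichlet(np.ones(n_endmembers), size=n_pixels)
Y = abundances @ true_endmembers + 0.01 * rng.random((n_pixels, n_bands))

model = NMF(n_components=n_endmembers, init="nndsvda", max_iter=500)
A_hat = model.fit_transform(Y)   # estimated abundances (pixels x endmembers)
S_hat = model.components_        # estimated endmember spectra (endmembers x bands)
print("reconstruction error:", round(model.reconstruction_err_, 4))
```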
arXiv Detail & Related papers (2020-03-02T17:20:04Z)