Self-learning sparse PCA for multimode process monitoring
- URL: http://arxiv.org/abs/2108.03449v1
- Date: Sat, 7 Aug 2021 13:50:16 GMT
- Title: Self-learning sparse PCA for multimode process monitoring
- Authors: Jingxin Zhang, Donghua Zhou, Maoyin Chen
- Abstract summary: This paper proposes a novel sparse principal component analysis algorithm with self-learning ability for successive modes.
Different from traditional multimode monitoring methods, the monitoring model is updated based on the current model and new data when a new mode arrives.
- Score: 2.8102838347038617
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper proposes a novel sparse principal component analysis algorithm
with self-learning ability for successive modes, where synaptic intelligence is
employed to measure the importance of variables and a regularization term is
added to preserve the learned knowledge of previous modes. Different from
traditional multimode monitoring methods, the monitoring model is updated based
on the current model and new data when a new mode arrives, thus delivering
prominent performance for sequential modes. Besides, computation and storage
resources are saved in the long run, because it is not necessary to frequently
retrain the model from scratch or to store data from previous modes.
More importantly, the model furnishes excellent interpretability owing to the
sparsity of parameters. Finally, a numerical case and a practical pulverizing
system are adopted to illustrate the effectiveness of the proposed algorithm.
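The abstract does not spell out the update equations, but the described combination of sparse loadings, an importance-weighted penalty toward the previous mode's parameters, and updating from the current model can be sketched as below. This is a minimal illustrative sketch, not the authors' implementation: the proximal-gradient scheme and the names `update_sparse_loadings`, `importance`, `lam`, `alpha`, and `lr` are assumptions made purely for illustration.

```python
import numpy as np

def update_sparse_loadings(X_new, P_prev, importance, lam=1.0, alpha=0.01,
                           lr=1e-2, n_iter=1000):
    """Re-fit sparse PCA loadings when a new mode arrives (illustrative sketch).

    X_new      : (n_samples, n_vars) scaled data from the new mode.
    P_prev     : (n_vars, n_comp) loadings learned on previous modes.
    importance : array broadcastable to P_prev, e.g. per-variable importance
                 scores in the spirit of synaptic intelligence (assumed given).

    Loss = ||X_new - X_new P P^T||_F^2 / n            (fit the new mode)
         + lam * sum(importance * (P - P_prev)**2)    (preserve previous modes)
         + alpha * ||P||_1                            (sparse, interpretable loadings)
    """
    A = X_new.T @ X_new / X_new.shape[0]   # sample covariance-like matrix
    P = P_prev.copy()                      # start from the current model, not from scratch
    for _ in range(n_iter):
        # gradient of the reconstruction term ||X - X P P^T||_F^2 w.r.t. P
        grad_rec = -4.0 * A @ P + 2.0 * (A @ P @ (P.T @ P) + P @ (P.T @ A @ P))
        # gradient of the importance-weighted quadratic (EWC/SI-style) penalty
        grad_keep = 2.0 * lam * importance * (P - P_prev)
        P = P - lr * (grad_rec + grad_keep)
        # proximal step for the L1 term: soft-thresholding keeps loadings sparse
        P = np.sign(P) * np.maximum(np.abs(P) - lr * alpha, 0.0)
    return P

# Toy usage: adapt loadings to a new mode's data without retraining from scratch.
rng = np.random.default_rng(0)
X_new = rng.standard_normal((200, 5))
P_prev = np.linalg.qr(rng.standard_normal((5, 2)))[0]   # previous-mode loadings
importance = np.ones((5, 1))                            # uniform importance for the demo
P_new = update_sparse_loadings(X_new, P_prev, importance)
print(P_new.round(3))
```

In PCA-based process monitoring, the updated loadings would then typically feed standard statistics such as Hotelling's T² and the squared prediction error (SPE); the exact monitoring statistics and the synaptic-intelligence importance estimation used by the authors are not detailed in the abstract.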
Related papers
- Recursive Learning of Asymptotic Variational Objectives [49.69399307452126]
General state-space models (SSMs) are widely used in statistical machine learning and are among the most classical generative models for sequential time-series data.
Online sequential IWAE (OSIWAE) allows for online learning of both model parameters and a Markovian recognition model for inferring latent states.
This approach is more theoretically well-founded than recently proposed online variational SMC methods.
arXiv Detail & Related papers (2024-11-04T16:12:37Z)
- Breaking Determinism: Fuzzy Modeling of Sequential Recommendation Using Discrete State Space Diffusion Model [66.91323540178739]
Sequential recommendation (SR) aims to predict items that users may be interested in based on their historical behavior.
We revisit SR from a novel information-theoretic perspective and find that sequential modeling methods fail to adequately capture the randomness and unpredictability of user behavior.
Inspired by fuzzy information processing theory, this paper introduces fuzzy sets of interaction sequences to overcome these limitations and better capture the evolution of users' real interests.
arXiv Detail & Related papers (2024-10-31T14:52:01Z)
- Optimization of geological carbon storage operations with multimodal latent dynamic model and deep reinforcement learning [1.8549313085249324]
This study introduces the multimodal latent dynamic (MLD) model, a deep learning framework for fast flow prediction and well control optimization in GCS.
Unlike existing models, the MLD supports diverse input modalities, allowing comprehensive data interactions.
The approach outperforms traditional methods, achieving the highest NPV while reducing computational resources by over 60%.
arXiv Detail & Related papers (2024-06-07T01:30:21Z)
- Towards Stable Machine Learning Model Retraining via Slowly Varying Sequences [6.067007470552307]
We propose a methodology for finding sequences of machine learning models that are stable across retraining iterations.
We develop a mixed-integer optimization formulation that is guaranteed to recover optimal models.
Our method shows stronger stability than greedily trained models with a small, controllable sacrifice in predictive power.
arXiv Detail & Related papers (2024-03-28T22:45:38Z)
- Embedded feature selection in LSTM networks with multi-objective evolutionary ensemble learning for time series forecasting [49.1574468325115]
We present a novel feature selection method embedded in Long Short-Term Memory networks.
Our approach optimizes the weights and biases of the LSTM in a partitioned manner.
Experimental evaluations on air quality time series data from Italy and southeast Spain demonstrate that our method substantially improves the generalization ability of conventional LSTMs.
arXiv Detail & Related papers (2023-12-29T08:42:10Z)
- Online Variational Sequential Monte Carlo [49.97673761305336]
We build upon the variational sequential Monte Carlo (VSMC) method, which provides computationally efficient and accurate model parameter estimation and Bayesian latent-state inference.
Online VSMC performs both parameter estimation and particle proposal adaptation efficiently and entirely on the fly.
arXiv Detail & Related papers (2023-12-19T21:45:38Z)
- Continual learning-based probabilistic slow feature analysis for multimode dynamic process monitoring [2.9631016562930546]
A novel multimode dynamic process monitoring approach is proposed by extending elastic weight consolidation (EWC) to probabilistic slow feature analysis (PSFA).
EWC was originally introduced for sequential multi-task learning with the aim of avoiding the catastrophic forgetting issue.
The effectiveness of the proposed method is demonstrated via a continuous stirred tank heater and a practical coal pulverizing system.
arXiv Detail & Related papers (2022-02-23T03:57:59Z)
- Blockwise Sequential Model Learning for Partially Observable Reinforcement Learning [14.642266310020505]
This paper proposes a new sequential model learning architecture to solve partially observable Markov decision problems.
The proposed architecture generates a latent variable in each data block with a length of multiple timesteps and passes the most relevant information to the next block for policy optimization.
Numerical results show that the proposed method significantly outperforms previous methods in various partially observable environments.
arXiv Detail & Related papers (2021-12-10T05:38:24Z)
- Monitoring multimode processes: a modified PCA algorithm with continual learning ability [2.5004754622137515]
Making the local monitoring model remember the features of previous modes can be an effective strategy.
A modified PCA algorithm is built with continual learning ability for monitoring multimode processes.
It is called PCA-EWC, where the significant features of previous modes are preserved when a PCA model is established for the current mode.
arXiv Detail & Related papers (2020-12-13T12:09:38Z)
- Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z)
- Model-Augmented Actor-Critic: Backpropagating through Paths [81.86992776864729]
Current model-based reinforcement learning approaches use the model simply as a learned black-box simulator.
We show how to make more effective use of the model by exploiting its differentiability.
arXiv Detail & Related papers (2020-05-16T19:18:10Z)