Second-Order Component Analysis for Fault Detection
- URL: http://arxiv.org/abs/2103.07303v1
- Date: Fri, 12 Mar 2021 14:25:37 GMT
- Title: Second-Order Component Analysis for Fault Detection
- Authors: Peng Jingchao, Zhao Haitao, Hu Zhengwei
- Abstract summary: High-order neural networks carry a risk of overfitting: they may learn not only the key information in the original data but also noise and anomalies.
This paper proposes a novel fault detection method called second-order component analysis (SCA).
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Process monitoring based on neural networks is attracting increasing
attention. Compared with classical neural networks, high-order neural networks
have natural advantages in dealing with heteroscedastic data. However,
high-order neural networks carry a risk of overfitting: they may learn not only
the key information in the original data but also noise and anomalies.
Orthogonal constraints can greatly reduce correlations between extracted
features, thereby reducing the overfitting risk. This paper proposes a novel
fault detection method called second-order component analysis (SCA). SCA rules
out the heteroscedasticity of process data by optimizing a second-order
autoencoder with orthogonal constraints. To solve this constrained optimization
problem, a geometric conjugate gradient algorithm is adopted, which performs
geometric optimization on the product of a Stiefel manifold and a Euclidean
manifold. Extensive experiments on the Tennessee-Eastman benchmark process show
that SCA outperforms PCA, KPCA, and the autoencoder in terms of missed
detection rate (MDR) and false alarm rate (FAR).
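The abstract does not spell out the geometric conjugate gradient update, but its core ingredient is keeping the encoder weights orthonormal while descending a reconstruction loss. The following minimal numpy sketch illustrates that ingredient with a tangent-space projection and QR retraction on the Stiefel manifold; the squared-feature map and all names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def second_order_features(x):
    """Augment inputs with pairwise (second-order) terms -- an illustrative
    stand-in for the paper's second-order autoencoder input map."""
    quad = np.outer(x, x)[np.triu_indices(len(x))]  # unique products x_i * x_j
    return np.concatenate([x, quad])

def stiefel_step(W, grad, lr=1e-2):
    """One projected-gradient step keeping W on the Stiefel manifold (W^T W = I).

    Project the Euclidean gradient onto the tangent space at W, take a
    descent step, then retract back to the manifold with a QR decomposition.
    """
    sym = (W.T @ grad + grad.T @ W) / 2
    riem_grad = grad - W @ sym          # tangent-space projection
    Q, R = np.linalg.qr(W - lr * riem_grad)
    return Q * np.sign(np.diag(R))      # QR retraction (sign-fixed for uniqueness)

# toy usage: one reconstruction-loss step for a linear encoder with tied decoder
rng = np.random.default_rng(0)
x = rng.standard_normal(5)
phi = second_order_features(x)          # second-order feature vector
W = np.linalg.qr(rng.standard_normal((phi.size, 3)))[0]  # orthonormal init
z = W.T @ phi                           # encode
grad = -2 * np.outer(phi - W @ z, z)    # d/dW of ||phi - W W^T phi||^2 (approx.)
W = stiefel_step(W, grad)
assert np.allclose(W.T @ W, np.eye(3), atol=1e-8)
```

A full conjugate gradient method would additionally combine successive Riemannian gradients into conjugate search directions; the projection-plus-retraction step above is the part such methods share.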
Related papers
- Efficient Second-Order Neural Network Optimization via Adaptive Trust Region Methods [0.0]
SecondOrderAdaptive (SOAA) is a novel optimization algorithm designed to overcome limitations of traditional second-order techniques.
We empirically demonstrate that SOAA achieves faster and more stable convergence compared to first-order approximations.
arXiv Detail & Related papers (2024-10-03T08:23:06Z)
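The entry above names the approach but not the update rule. As a hedged, generic illustration of a second-order step governed by an adaptive trust region (not the actual SOAA algorithm), consider the following numpy sketch; all names and constants are assumptions.

```python
import numpy as np

def trust_region_step(grad, curv_diag, radius):
    """Illustrative second-order step with a trust-region cap.

    Uses a diagonal curvature estimate as a cheap Hessian stand-in and
    rescales the Newton-like step so its norm stays within `radius`.
    """
    step = grad / (curv_diag + 1e-8)        # diagonal "Newton" direction
    norm = np.linalg.norm(step)
    if norm > radius:                       # cap the step at the trust radius
        step *= radius / norm
    return step

# toy usage on a quadratic f(w) = 0.5 * w^T diag(h) w
h = np.array([1.0, 10.0, 100.0])
w = np.array([1.0, 1.0, 1.0])
radius = 1.0
for _ in range(20):
    grad = h * w
    w = w - trust_region_step(grad, curv_diag=h, radius=radius)
    radius = min(radius * 1.1, 2.0)  # simple adaptive rule: grow while stable
print(w)  # converges toward the minimizer at the origin
```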
- Compositional Curvature Bounds for Deep Neural Networks [7.373617024876726]
A key challenge that threatens the widespread use of neural networks in safety-critical applications is their vulnerability to adversarial attacks.
We study the second-order behavior of continuously differentiable deep neural networks, focusing on robustness against adversarial perturbations.
We introduce a novel algorithm to analytically compute provable upper bounds on the second derivative of neural networks.
arXiv Detail & Related papers (2024-06-07T17:50:15Z)
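The entry above gives only the result. For intuition, second-derivative bounds compose layer by layer via the chain rule h'' = f''(g) g'^2 + f'(g) g''; a minimal Python sketch of that propagation, with made-up per-layer constants, follows. It illustrates the compositional idea only, not the paper's exact algorithm.

```python
def compose_curvature(layers):
    """Propagate (Lipschitz, curvature) bounds through a composition.

    For h = f o g with |f'| <= Lf, |f''| <= Cf, |g'| <= Lg, |g''| <= Cg,
    the chain rule h'' = f''(g) g'^2 + f'(g) g'' gives
      |h'| <= Lf * Lg   and   |h''| <= Cf * Lg**2 + Lf * Cg.
    """
    L, C = 1.0, 0.0                     # identity map: Lipschitz 1, curvature 0
    for Lf, Cf in layers:               # apply layers innermost-first
        L, C = Lf * L, Cf * L**2 + Lf * C
    return L, C

# toy usage: three layers with assumed per-layer bounds (illustrative numbers)
layers = [(1.0, 0.5), (2.0, 0.1), (1.5, 0.0)]   # (Lipschitz, curvature) each
L, C = compose_curvature(layers)
print(f"network Lipschitz bound: {L}, curvature bound: {C}")
```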
- On Excess Risk Convergence Rates of Neural Network Classifiers [8.329456268842227]
We study the performance of plug-in classifiers based on neural networks in a binary classification setting as measured by their excess risks.
We analyze the estimation and approximation properties of neural networks to obtain a dimension-free, uniform rate of convergence.
arXiv Detail & Related papers (2023-09-26T17:14:10Z)
- Neural Fast Full-Rank Spatial Covariance Analysis for Blind Source Separation [26.6020148790775]
This paper describes an efficient unsupervised learning method for a neural source separation model.
We propose neural FastFCA based on a jointly-diagonalizable yet full-rank spatial model.
Experiments using mixture signals of two to four sound sources show that neural FastFCA outperforms conventional BSS methods.
arXiv Detail & Related papers (2023-06-17T02:50:17Z)
- Large-Scale Sequential Learning for Recommender and Engineering Systems [91.3755431537592]
In this thesis, we focus on the design of automatic algorithms that provide personalized ranking by adapting to the current conditions.
For the former, we propose a novel algorithm called SAROS that takes both kinds of feedback into account when learning over a sequence of interactions.
The proposed idea of taking neighbouring lines into account shows statistically significant improvements over the initial approach for fault detection in power grids.
arXiv Detail & Related papers (2022-05-13T21:09:41Z)
- Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for sparse recovery [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward model mis-specifications.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem.
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
arXiv Detail & Related papers (2021-10-20T06:15:45Z)
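Algorithm unfolding, as used by REST, turns the iterations of a classical solver into network layers. A minimal numpy sketch of the idea in the LISTA tradition — unrolled iterative shrinkage-thresholding with classical (untrained) weights — follows; shapes, step sizes, and thresholds are illustrative assumptions, and REST's robust modifications are not reproduced here.

```python
import numpy as np

def soft_threshold(x, theta):
    """Elementwise shrinkage operator used in ISTA-style sparse recovery."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def unrolled_ista(y, A, n_layers=10, theta=0.1):
    """Forward pass of ISTA unrolled into `n_layers` network layers.

    In a learned (LISTA/REST-style) network, W1, W2, and theta would be
    trained per layer; here they are fixed to the classical ISTA values.
    """
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    W1 = A.T / L                            # input-injection weights
    W2 = np.eye(A.shape[1]) - A.T @ A / L   # state-transition weights
    x = np.zeros(A.shape[1])
    for _ in range(n_layers):
        x = soft_threshold(W2 @ x + W1 @ y, theta / L)
    return x

# toy usage: recover a sparse vector from compressive measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50)) / np.sqrt(20)
x_true = np.zeros(50)
x_true[[3, 17, 41]] = [1.0, -2.0, 0.5]
x_hat = unrolled_ista(A @ x_true, A, n_layers=200)
print(np.flatnonzero(np.abs(x_hat) > 0.1))  # ideally the support {3, 17, 41}
```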
- LocalDrop: A Hybrid Regularization for Deep Neural Networks [98.30782118441158]
We propose a new approach to the regularization of neural networks, called LocalDrop, based on the local Rademacher complexity.
A new regularization function for both fully-connected networks (FCNs) and convolutional neural networks (CNNs) has been developed based on the proposed upper bound of the local Rademacher complexity.
arXiv Detail & Related papers (2021-03-01T03:10:11Z)
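The LocalDrop entry above combines dropout with a complexity-driven regularization term. The exact function derived in the paper is not reproduced here; the following numpy sketch pairs inverted dropout with a simple norm penalty as a hedged stand-in, with all names and constants assumed.

```python
import numpy as np

def localdrop_style_layer(x, W, rng, p=0.2, lam=1e-3):
    """Illustrative layer: dropout plus a norm penalty standing in for the
    complexity-based regularization term (NOT the paper's exact function).

    Returns the layer output and the regularization value to add to the loss.
    """
    mask = rng.random(x.shape) >= p                  # keep units w.p. 1 - p
    h = np.maximum(W @ (x * mask / (1 - p)), 0.0)    # inverted dropout + ReLU
    reg = lam * np.sum(W ** 2)                       # stand-in complexity penalty
    return h, reg

# toy usage
rng = np.random.default_rng(0)
x = rng.standard_normal(8)
W = rng.standard_normal((4, 8)) * 0.1
h, reg = localdrop_style_layer(x, W, rng)
```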
- Bayesian Optimization with Machine Learning Algorithms Towards Anomaly Detection [66.05992706105224]
In this paper, an effective anomaly detection framework is proposed utilizing the Bayesian Optimization technique.
The performance of the considered algorithms is evaluated using the ISCX 2012 dataset.
Experimental results show the effectiveness of the proposed framework in terms of accuracy, precision, false alarm rate, and recall.
arXiv Detail & Related papers (2020-08-05T19:29:35Z)
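The framework above tunes detector hyperparameters with Bayesian Optimization. A minimal, generic loop — GP surrogate plus expected-improvement acquisition over a hypothetical detection threshold — is sketched below using scikit-learn; the objective and all constants are assumptions, not the paper's setup.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def objective(threshold):
    """Hypothetical validation error of an anomaly detector at a given
    decision threshold -- stands in for training/evaluating a real model."""
    return (threshold - 0.7) ** 2 + 0.05 * np.sin(20 * threshold)

# Bayesian optimization loop: fit a GP surrogate, pick the next point
# by expected improvement (EI), evaluate, repeat.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(3, 1))          # initial random design
y = np.array([objective(x[0]) for x in X])
for _ in range(15):
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    cand = rng.uniform(0, 1, size=(256, 1))  # candidate thresholds
    mu, std = gp.predict(cand, return_std=True)
    z = (y.min() - mu) / (std + 1e-9)
    ei = (y.min() - mu) * norm.cdf(z) + std * norm.pdf(z)  # expected improvement
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next[0]))
print("best threshold:", X[np.argmin(y)][0])
```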
- Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks [50.42141893913188]
We study distributed algorithms for large-scale AUC maximization with a deep neural network as the predictive model.
Our algorithm requires far fewer communication rounds while still enjoying theoretical convergence guarantees.
Experiments on several datasets demonstrate the effectiveness of our algorithm and confirm our theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
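The distributed machinery of the paper above is beyond a short sketch, but the objective it optimizes is easy to state: push scores of positive examples above scores of negatives over all pairs. A hedged numpy sketch of a standard pairwise squared-hinge AUC surrogate (not necessarily the paper's exact formulation) follows.

```python
import numpy as np

def pairwise_auc_surrogate(scores_pos, scores_neg, margin=1.0):
    """Squared-hinge surrogate for AUC: penalize positive/negative pairs
    whose score gap falls short of the margin. Minimizing this pushes
    positives above negatives, which is what AUC measures."""
    gaps = scores_pos[:, None] - scores_neg[None, :]   # all pos-neg pairs
    return np.mean(np.maximum(margin - gaps, 0.0) ** 2)

# toy usage with scores from a hypothetical network
rng = np.random.default_rng(0)
loss = pairwise_auc_surrogate(rng.normal(1.0, 1.0, 50), rng.normal(0.0, 1.0, 80))
print(loss)
```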
- MSE-Optimal Neural Network Initialization via Layer Fusion [68.72356718879428]
Deep neural networks achieve state-of-the-art performance for a range of classification and inference tasks.
However, the use of gradient-based training combined with nonconvexity renders learning susceptible to convergence problems.
We propose fusing neighboring layers of deeper networks that are trained with random initialization.
arXiv Detail & Related papers (2020-01-28T18:25:15Z)
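A hedged numpy sketch of the fusion idea from the entry above: two neighboring linear layers collapse exactly into their matrix product, which suggests initializing a shallower network from a deeper trained one. With activations between layers this is only an approximation, and the paper's MSE-optimal construction is not reproduced here.

```python
import numpy as np

def fuse_linear_layers(W2, W1):
    """Fuse two neighboring linear layers into one.

    Exact when no nonlinearity sits between them (W2 (W1 x) = (W2 W1) x);
    with an activation in between, the product is only an approximation,
    in the spirit of initializing a shallower network from a deeper one.
    """
    return W2 @ W1

# toy usage: collapse a deep random linear stack into a single init matrix
rng = np.random.default_rng(0)
weights = [rng.standard_normal((16, 16)) / 4 for _ in range(4)]
W_init = weights[0]
for W in weights[1:]:
    W_init = fuse_linear_layers(W, W_init)
x = rng.standard_normal(16)
deep_out = x
for W in weights:
    deep_out = W @ deep_out
assert np.allclose(W_init @ x, deep_out)   # exact for purely linear stacks
```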
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.