Learnable Faster Kernel-PCA for Nonlinear Fault Detection: Deep
Autoencoder-Based Realization
- URL: http://arxiv.org/abs/2112.04193v1
- Date: Wed, 8 Dec 2021 09:41:46 GMT
- Title: Learnable Faster Kernel-PCA for Nonlinear Fault Detection: Deep
Autoencoder-Based Realization
- Authors: Zelin Ren, Xuebing Yang, Yuchen Jiang, Wensheng Zhang
- Abstract summary: Kernel principal component analysis (KPCA) is a well-recognized nonlinear dimensionality reduction method.
In this work, a learnable faster realization of the conventional KPCA is proposed.
The proposed DAE-PCA method is proved to be equivalent to KPCA, with the added advantage of automatically searching for the most suitable nonlinear high-dimensional space according to the inputs.
- Score: 7.057302509355857
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Kernel principal component analysis (KPCA) is a well-recognized nonlinear
dimensionality reduction method that has been widely used in nonlinear fault
detection tasks. As a kernel trick-based method, KPCA inherits two major
problems. First, the form and the parameters of the kernel function are usually
selected blindly, relying heavily on trial-and-error; an inappropriate selection
can cause serious performance degradation. Second, at the online monitoring
stage, KPCA incurs a heavy computational burden and poor real-time performance,
because the kernel method must leverage all the offline training data. In this
work, to address these two drawbacks, a learnable, faster realization of
conventional KPCA is proposed. The core
idea is to parameterize all feasible kernel functions using the novel nonlinear
DAE-FE (deep autoencoder based feature extraction) framework and to develop the
DAE-PCA (deep autoencoder based principal component analysis) approach in
detail. The proposed DAE-PCA method is proved to be equivalent to KPCA, with the
added advantage of automatically searching for the most suitable nonlinear
high-dimensional space according to the inputs. Furthermore, online
computational efficiency improves by approximately 100 times compared with
conventional KPCA. The effectiveness and superiority of the proposed method are
illustrated on the Tennessee Eastman (TE) process benchmark.
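
To make the idea concrete, below is a minimal sketch of the DAE-PCA recipe as the abstract describes it: train a deep autoencoder, run ordinary PCA on the learned features, and monitor with the usual T² and SPE statistics. The architecture, layer sizes, and thresholding details here are illustrative assumptions, not the authors' exact implementation.

```python
# Hedged sketch of DAE-PCA: an autoencoder stands in for the implicit kernel
# map, and ordinary PCA runs on the learned features. Sizes are illustrative.
import numpy as np
import torch
import torch.nn as nn

class DAE(nn.Module):
    def __init__(self, n_in, n_feat):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, 64), nn.ReLU(),
                                     nn.Linear(64, n_feat))
        self.decoder = nn.Sequential(nn.Linear(n_feat, 64), nn.ReLU(),
                                     nn.Linear(64, n_in))
    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_dae(X, n_feat=16, epochs=200, lr=1e-3):
    model = DAE(X.shape[1], n_feat)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    xb = torch.tensor(X, dtype=torch.float32)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(xb), xb)
        loss.backward()
        opt.step()
    return model

def fit_pca_monitor(model, X, n_pc=5):
    # PCA on the learned features from the normal-operation training data.
    with torch.no_grad():
        H = model.encoder(torch.tensor(X, dtype=torch.float32)).numpy()
    mu = H.mean(0)
    _, s, Vt = np.linalg.svd(H - mu, full_matrices=False)
    P = Vt[:n_pc].T                       # loading matrix
    lam = (s[:n_pc] ** 2) / (len(X) - 1)  # retained eigenvalues
    return mu, P, lam

def t2_spe(model, mu, P, lam, x):
    # One encoder pass + two small matrix products per online sample.
    with torch.no_grad():
        h = model.encoder(torch.tensor(x[None], dtype=torch.float32)).numpy()[0] - mu
    t = P.T @ h
    t2 = float(t @ (t / lam))             # Hotelling T^2 statistic
    spe = float(h @ h - t @ t)            # squared prediction error (SPE)
    return t2, spe
```

Online monitoring then needs only a single forward pass per sample rather than kernel evaluations against all offline training data, which is where a speed-up of the reported order would come from.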
Related papers
- Efficient Two-Stage Gaussian Process Regression Via Automatic Kernel Search and Subsampling [5.584863079768593]
We introduce a flexible two-stage GPR framework that separates mean prediction and uncertainty quantification (UQ), preventing mean misspecification from contaminating the UQ.
We also propose a kernel selection algorithm, supported by theoretical analysis, that selects the optimal kernel from a candidate set and guards against kernel misspecification.
With much lower computational cost, our subsampling-based strategy can yield competitive or better performance than training exclusively on the full dataset.
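
As a rough illustration of the two-stage idea (not the paper's exact procedure), the sketch below selects a kernel on a random subsample by fitted log marginal likelihood using scikit-learn's GP regressor; the candidate set and subsample size are assumptions.

```python
# Hedged sketch: kernel search by marginal likelihood on a subsample only.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern, RationalQuadratic

def select_kernel(X, y, candidates, m=200, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=min(m, len(X)), replace=False)
    best, best_ll = None, -np.inf
    for k in candidates:
        gpr = GaussianProcessRegressor(kernel=k, normalize_y=True).fit(X[idx], y[idx])
        ll = gpr.log_marginal_likelihood_value_  # after hyperparameter fitting
        if ll > best_ll:
            best, best_ll = gpr, ll
    return best

X = np.random.rand(1000, 2)
y = np.sin(X @ np.array([3.0, 1.0]))
model = select_kernel(X, y, [RBF(), Matern(nu=1.5), RationalQuadratic()])
mean, std = model.predict(X[:5], return_std=True)
```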
arXiv Detail & Related papers (2024-05-22T16:11:29Z)
- Efficient kernel surrogates for neural network-based regression [0.8030359871216615]
We study the performance of the Conjugate Kernel (CK), an efficient approximation to the Neural Tangent Kernel (NTK).
We show that CK performance is only marginally worse than that of the NTK and, in certain cases, is superior.
In addition to providing a theoretical grounding for using CKs instead of NTKs, our framework suggests a recipe for improving DNN accuracy inexpensively.
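
For intuition, here is a minimal numpy sketch of the Conjugate Kernel for a one-hidden-layer ReLU network: the CK is the Gram matrix of the network's (here random, untrained) last-layer features, so regression with it reduces to a single kernel ridge solve. The width, scaling, and ridge value are illustrative assumptions.

```python
# Hedged sketch: Conjugate Kernel = Gram matrix of last-layer features.
import numpy as np

def ck_features(X, width=2048, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], width)) / np.sqrt(X.shape[1])
    return np.maximum(X @ W, 0.0) * np.sqrt(2.0 / width)  # random ReLU features

def ck_ridge_fit(Xtr, ytr, ridge=1e-3):
    Phi = ck_features(Xtr)
    K = Phi @ Phi.T                                  # Conjugate Kernel Gram matrix
    alpha = np.linalg.solve(K + ridge * np.eye(len(K)), ytr)
    return Phi, alpha

def ck_ridge_predict(Xte, Phi_tr, alpha):
    return ck_features(Xte) @ Phi_tr.T @ alpha       # K(test, train) @ alpha
```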
arXiv Detail & Related papers (2023-10-28T06:41:47Z)
- Extending Kernel PCA through Dualization: Sparsity, Robustness and Fast Algorithms [14.964073009670194]
This paper revisits Kernel Principal Component Analysis (KPCA) through dualization of a difference of convex functions.
This allows KPCA to be naturally extended to multiple objective functions and leads to efficient gradient-based algorithms that avoid the expensive SVD of the Gram matrix.
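
The paper's difference-of-convex dual algorithms are more general, but the flavor (iterative updates on the Gram matrix instead of a full SVD) can be shown with a simple power-iteration sketch for the leading kernel principal component; the RBF kernel and normalization convention here are assumptions.

```python
# Hedged illustration (not the paper's exact DC algorithm): the top kernel
# principal component via iteration on the centered Gram matrix, no full SVD.
import numpy as np

def rbf_gram(X, gamma=1.0):
    sq = ((X[:, None] - X[None]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def top_kpca_component(X, gamma=1.0, iters=200):
    K = rbf_gram(X, gamma)
    n = len(K)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                        # double-centered Gram matrix
    a = np.random.default_rng(0).normal(size=n)
    for _ in range(iters):                # power iteration
        a = Kc @ a
        a /= np.linalg.norm(a)
    lam = a @ Kc @ a                      # leading eigenvalue
    return a / np.sqrt(lam), lam          # KPCA-normalized dual coefficients
```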
arXiv Detail & Related papers (2023-06-09T11:27:35Z)
- An online algorithm for contrastive Principal Component Analysis [9.090031210111919]
We derive an online algorithm for cPCA* and show that it maps onto a neural network with local learning rules, so it can potentially be implemented in energy efficient neuromorphic hardware.
We evaluate the performance of our online algorithm on real datasets and highlight the differences and similarities with the original formulation.
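
A minimal sketch of an online, Oja-style contrastive update (the paper's cPCA* objective and neural circuit differ in detail): each step takes one target and one background sample and applies a local Hebbian/anti-Hebbian rule. The alpha and learning rate below are illustrative assumptions.

```python
# Hedged sketch: online contrastive-PCA-style direction with a local rule.
import numpy as np

def online_cpca(X_target, X_background, alpha=1.0, lr=1e-2, steps=5000, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X_target.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(steps):
        x = X_target[rng.integers(len(X_target))]          # one target sample
        y = X_background[rng.integers(len(X_background))]  # one background sample
        # Hebbian step toward target variance, anti-Hebbian for background
        w += lr * ((x @ w) * x - alpha * (y @ w) * y)
        w /= np.linalg.norm(w)                             # Oja-style renormalization
    return w
```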
arXiv Detail & Related papers (2022-11-14T19:48:48Z)
- Efficient Few-Shot Object Detection via Knowledge Inheritance [62.36414544915032]
Few-shot object detection (FSOD) aims at learning a generic detector that can adapt to unseen tasks with scarce training samples.
We present an efficient pretrain-transfer framework (PTF) baseline that incurs no additional computation.
We also propose an adaptive length re-scaling (ALR) strategy to alleviate the vector length inconsistency between the predicted novel weights and the pretrained base weights.
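
A plausible reading of ALR in code (the paper's exact formula may differ): rescale each predicted novel-class weight vector so its norm matches the mean norm of the pretrained base-class weights.

```python
# Hedged sketch of adaptive length re-scaling (ALR): match novel weight norms
# to the average base weight norm to remove the length inconsistency.
import numpy as np

def adaptive_length_rescale(novel_w, base_w):
    # novel_w: (n_novel, d) predicted weights; base_w: (n_base, d) pretrained weights
    base_norm = np.linalg.norm(base_w, axis=1).mean()
    novel_norms = np.linalg.norm(novel_w, axis=1, keepdims=True)
    return novel_w * (base_norm / novel_norms)
```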
arXiv Detail & Related papers (2022-03-23T06:24:31Z)
- Large-scale Optimization of Partial AUC in a Range of False Positive Rates [51.12047280149546]
The area under the ROC curve (AUC) is one of the most widely used performance measures for classification models in machine learning.
We develop an efficient approximate gradient descent method based on a recent practical envelope-smoothing technique.
Our proposed algorithm can also be used to minimize the sum of ranked-range losses, which likewise lacks efficient solvers.
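
For orientation, here is a plain (unsmoothed) partial-AUC surrogate restricted to a false-positive-rate range; the paper's contribution is an efficient gradient method via envelope smoothing of essentially this kind of objective. The hinge surrogate and range handling below are illustrative assumptions.

```python
# Hedged sketch: pairwise hinge surrogate for partial AUC in an FPR range.
import numpy as np

def partial_auc_loss(pos_scores, neg_scores, fpr_lo=0.0, fpr_hi=0.3):
    neg_sorted = np.sort(neg_scores)[::-1]            # hardest negatives first
    n = len(neg_sorted)
    lo = int(fpr_lo * n)
    hi = max(int(np.ceil(fpr_hi * n)), lo + 1)
    hard_negs = neg_sorted[lo:hi]                     # negatives in the FPR range
    margins = 1.0 - (pos_scores[:, None] - hard_negs[None, :])
    return np.maximum(margins, 0.0).mean()            # average pairwise hinge
```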
arXiv Detail & Related papers (2022-03-03T03:46:18Z)
- CATRO: Channel Pruning via Class-Aware Trace Ratio Optimization [61.71504948770445]
We propose a novel channel pruning method via Class-Aware Trace Ratio Optimization (CATRO) to reduce the computational burden and accelerate the model inference.
We show that CATRO achieves higher accuracy with similar cost or lower cost with similar accuracy than other state-of-the-art channel pruning algorithms.
Because of its class-aware property, CATRO is suitable for adaptively pruning efficient networks for various classification subtasks, facilitating the practical deployment and use of deep networks in real-world applications.
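
Very loosely, the class-aware trace-ratio idea can be illustrated by scoring channels with a per-channel between-class/within-class scatter ratio and keeping the top-k; CATRO itself optimizes the ratio over channel subsets rather than scoring channels independently, so this is only a simplified stand-in.

```python
# Hedged, simplified illustration of class-aware channel scoring.
import numpy as np

def channel_trace_ratio_scores(feats, labels):
    # feats: (n_samples, n_channels) pooled activations; labels: (n_samples,)
    mu = feats.mean(0)
    scores = np.zeros(feats.shape[1])
    for c in np.unique(labels):
        Fc = feats[labels == c]
        between = len(Fc) * (Fc.mean(0) - mu) ** 2    # per-channel between-class scatter
        within = ((Fc - Fc.mean(0)) ** 2).sum(0)      # per-channel within-class scatter
        scores += between / (within + 1e-8)
    return scores

def prune_channels(feats, labels, keep):
    # Indices of the `keep` highest-scoring channels.
    return np.argsort(channel_trace_ratio_scores(feats, labels))[::-1][:keep]
```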
arXiv Detail & Related papers (2021-10-21T06:26:31Z)
- Kernel Identification Through Transformers [54.3795894579111]
Kernel selection plays a central role in determining the performance of Gaussian Process (GP) models.
This work addresses the challenge of constructing custom kernel functions for high-dimensional GP regression models.
We introduce a novel approach named KITT: Kernel Identification Through Transformers.
arXiv Detail & Related papers (2021-06-15T14:32:38Z)
- Statistical Optimality and Computational Efficiency of Nyström Kernel PCA [0.913755431537592]
We study the trade-off between computational complexity and statistical accuracy in Nyström approximate kernel principal component analysis (KPCA).
We show that Nyström approximate KPCA statistically outperforms another popular approximation scheme, the random feature approximation, when applied to KPCA.
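
A standard Nyström-KPCA sketch of the kind this analysis concerns: eigendecompose only an m×m landmark block rather than the full n×n Gram matrix. The RBF kernel, uniform landmark sampling, and the omitted kernel-space centering are simplifications here.

```python
# Hedged sketch of Nystrom-approximate KPCA with m landmark points.
import numpy as np

def rbf(A, B, gamma=1.0):
    sq = ((A[:, None] - B[None]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def nystrom_kpca(X, m=100, n_pc=5, gamma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    Knm = rbf(X, X[idx], gamma)              # n x m cross-kernel block
    Kmm = rbf(X[idx], X[idx], gamma)         # m x m landmark block
    vals, vecs = np.linalg.eigh(Kmm)
    vals = np.clip(vals, 1e-10, None)
    Phi = Knm @ vecs / np.sqrt(vals)         # approximate kernel feature map
    Phi -= Phi.mean(0)
    _, _, Vt = np.linalg.svd(Phi, full_matrices=False)
    return Phi @ Vt[:n_pc].T                 # top approximate principal scores
```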
arXiv Detail & Related papers (2021-05-19T01:49:35Z)
- Flow-based Kernel Prior with Application to Blind Super-Resolution [143.21527713002354]
Kernel estimation is generally one of the key problems in blind image super-resolution (SR).
This paper proposes a normalizing flow-based kernel prior (FKP) for kernel modeling.
Experiments on synthetic and real-world images demonstrate that the proposed FKP can significantly improve the kernel estimation accuracy.
arXiv Detail & Related papers (2021-03-29T22:37:06Z)
- Approximation Algorithms for Sparse Principal Component Analysis [57.5357874512594]
Principal component analysis (PCA) is a widely used dimension reduction technique in machine learning and statistics.
Various approaches to obtain sparse principal direction loadings have been proposed, which are termed Sparse Principal Component Analysis.
We present thresholding as a provably accurate, polynomial-time approximation algorithm for the SPCA problem.
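
The thresholding scheme is simple enough to state in a few lines: compute the dense leading principal direction, keep its k largest-magnitude loadings, and renormalize. The approximation guarantees proved in the paper are not reproduced here.

```python
# Hedged sketch of thresholding for sparse PCA (SPCA).
import numpy as np

def thresholded_spca(X, k):
    Xc = X - X.mean(0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    v = Vt[0]                             # dense leading principal direction
    keep = np.argsort(np.abs(v))[-k:]     # k largest-magnitude loadings
    sparse_v = np.zeros_like(v)
    sparse_v[keep] = v[keep]
    return sparse_v / np.linalg.norm(sparse_v)
```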
arXiv Detail & Related papers (2020-06-23T04:25:36Z)