Efficient Nonlinear RX Anomaly Detectors
- URL: http://arxiv.org/abs/2012.05799v1
- Date: Mon, 7 Dec 2020 21:57:54 GMT
- Title: Efficient Nonlinear RX Anomaly Detectors
- Authors: José A. Padrón Hidalgo, Adrián Pérez-Suay, Fatih Nar, and
Gustau Camps-Valls
- Abstract summary: We propose two families of techniques to improve the efficiency of the standard kernel Reed-Xiaoli (RX) method for anomaly detection.
We show that the proposed efficient methods have a lower computational cost and perform similarly to (or outperform) the standard kernel RX algorithm.
- Score: 7.762712532657168
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Current anomaly detection algorithms are typically challenged by either
accuracy or efficiency. More accurate nonlinear detectors are typically slow
and not scalable. In this letter, we propose two families of techniques to
improve the efficiency of the standard kernel Reed-Xiaoli (RX) method for
anomaly detection by approximating the kernel function with either
data-independent random Fourier features or a data-dependent basis via the
Nyström approach. We compare all methods on both real multi- and
hyperspectral images. We show that the proposed efficient methods have a lower
computational cost and perform similarly to (or outperform) the standard kernel
RX algorithm thanks to their implicit regularization effect. Last but not
least, the Nyström approach offers improved detection power.
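The random-Fourier-feature route described in the abstract can be sketched as follows: draw random frequencies for a Gaussian kernel, map every pixel into the approximate feature space, and apply the ordinary linear RX (Mahalanobis) score there. This is an illustrative sketch, not the authors' code; the bandwidth `sigma`, feature count `n_features`, and ridge term `eps` are assumed values.

```python
# Sketch: RX anomaly detection with a random Fourier feature (RFF)
# approximation of the Gaussian kernel. Illustrative only.
import numpy as np

def rff_map(X, n_features=100, sigma=2.0, seed=0):
    """Map X (n_samples, n_dims) into an RFF space approximating the
    Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=1.0 / sigma, size=(X.shape[1], n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

def rx_scores(Z, eps=1e-3):
    """Linear RX: squared Mahalanobis distance of each row to the mean.
    The ridge term eps plays the regularizing role noted in the abstract."""
    Zc = Z - Z.mean(axis=0)
    cov = (Zc.T @ Zc) / (Z.shape[0] - 1) + eps * np.eye(Z.shape[1])
    return np.einsum('ij,jk,ik->i', Zc, np.linalg.inv(cov), Zc)

# Toy example: a background cluster plus one planted off-cluster sample.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
X[0] += 8.0  # plant an anomalous sample at index 0
scores = rx_scores(rff_map(X))
# the planted anomaly typically attains a much larger score than the background
```

The appeal of this route is that the feature map is data-independent: the cost of scoring n pixels is linear in n, instead of the O(n^2)-and-worse cost of the exact kernel RX.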
Related papers
- Stochastic Optimization for Non-convex Problem with Inexact Hessian
Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) and adaptive regularization using cubics (ARC) methods have proven to have very appealing theoretical properties.
We show that TR and ARC methods can simultaneously accommodate inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z)
- Low-rank extended Kalman filtering for online learning of neural networks from streaming data [71.97861600347959]
We propose an efficient online approximate Bayesian inference algorithm for estimating the parameters of a nonlinear function from a potentially non-stationary data stream.
The method is based on the extended Kalman filter (EKF), but uses a novel low-rank plus diagonal decomposition of the posterior matrix.
In contrast to methods based on variational inference, our method is fully deterministic, and does not require step-size tuning.
arXiv Detail & Related papers (2023-05-31T03:48:49Z)
- Optimal Algorithms for the Inhomogeneous Spiked Wigner Model [89.1371983413931]
We derive an approximate message-passing algorithm (AMP) for the inhomogeneous problem.
We identify in particular the existence of a statistical-to-computational gap where known algorithms require a signal-to-noise ratio bigger than the information-theoretic threshold to perform better than random.
arXiv Detail & Related papers (2023-02-13T19:57:17Z)
- Towards Large Certified Radius in Randomized Smoothing using Quasiconcave Optimization [3.5133481941064164]
In this work, we show that by exploiting a quasi-fixed problem structure, we can find the optimal certified radii for most data points with slight computational overhead.
This leads to an efficient and effective input-specific randomized smoothing algorithm.
arXiv Detail & Related papers (2023-02-01T03:25:43Z)
- RFFNet: Large-Scale Interpretable Kernel Methods via Random Fourier Features [3.0079490585515347]
We introduce RFFNet, a scalable method that learns the kernel relevances on the fly via first-order optimization.
We show that our approach has a small memory footprint and run-time, low prediction error, and effectively identifies relevant features.
We supply users with an efficient, PyTorch-based library that adheres to the scikit-learn standard API, along with code for fully reproducing our results.
arXiv Detail & Related papers (2022-11-11T18:50:34Z)
- A Robust and Explainable Data-Driven Anomaly Detection Approach For Power Electronics [56.86150790999639]
We present two anomaly detection and classification approaches, namely the Matrix Profile algorithm and anomaly transformer.
The Matrix Profile algorithm is shown to be well suited as a generalizable approach for detecting real-time anomalies in streaming time-series data.
A series of custom filters is created and added to the detector to tune its sensitivity, recall, and detection accuracy.
arXiv Detail & Related papers (2022-09-23T06:09:35Z)
- Linear Time Kernel Matrix Approximation via Hyperspherical Harmonics [3.24890820102255]
We propose a new technique for constructing low-rank approximations of matrices that arise in kernel methods for machine learning.
Our approach pairs a novel automatically constructed analytic expansion of the underlying kernel function with a data-dependent compression step to further optimize the approximation.
Experimental results show our approach compares favorably to the commonly used Nystrom method with respect to both accuracy for a given rank and computational time for a given accuracy across a variety of kernels, dimensions, and datasets.
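The Nyström method used as the baseline in that comparison (and as the data-dependent approximation in the main paper) can be sketched generically: sample m landmark points, form the cross-kernel to the landmarks, and factor through the landmark kernel's inverse square root. This is a standard textbook sketch, not any paper's implementation; the landmark count `m` and RBF bandwidth `gamma` are illustrative choices.

```python
# Sketch: standard Nystrom low-rank approximation of an RBF kernel matrix.
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian (RBF) kernel matrix between the rows of A and the rows of B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def nystrom(X, m=50, gamma=0.5, seed=0):
    """Return a factor F with F @ F.T ~ K, using m random landmark points."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(X.shape[0], size=m, replace=False)
    K_nm = rbf_kernel(X, X[idx], gamma)       # (n, m) cross-kernel
    K_mm = rbf_kernel(X[idx], X[idx], gamma)  # (m, m) landmark kernel
    # symmetric inverse square root of K_mm via eigendecomposition
    w, V = np.linalg.eigh(K_mm)
    w = np.clip(w, 1e-12, None)
    return K_nm @ (V @ np.diag(w ** -0.5) @ V.T)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
F = nystrom(X)                       # (200, 50) low-rank factor
K = rbf_kernel(X, X)                 # exact kernel, for comparison only
err = np.linalg.norm(K - F @ F.T) / np.linalg.norm(K)
```

Because the landmarks are drawn from the data itself, the basis adapts to the data distribution, which is the intuition behind the improved detection power the main abstract reports for the Nyström variant.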
arXiv Detail & Related papers (2022-02-08T05:19:39Z)
- Sparse PCA via $l_{2,p}$-Norm Regularization for Unsupervised Feature Selection [138.97647716793333]
We propose a simple and efficient unsupervised feature selection method by combining reconstruction error with $l_{2,p}$-norm regularization.
We present an efficient optimization algorithm to solve the proposed unsupervised model, and analyse the convergence and computational complexity of the algorithm theoretically.
arXiv Detail & Related papers (2020-12-29T04:08:38Z)
- Randomized RX for target detection [8.480205772461927]
This work tackles the target detection problem through the well-known global RX method.
We propose random Fourier features to approximate the Gaussian kernel in kernel RX.
Results over both synthetic and real-world image target detection problems show space and time efficiency of the proposed method.
arXiv Detail & Related papers (2020-12-08T19:18:49Z)
- Kernel k-Means, By All Means: Algorithms and Strong Consistency [21.013169939337583]
Kernel $k$-means clustering is a powerful tool for unsupervised learning of non-linear data.
In this paper, we generalize results leveraging a general family of means to combat sub-optimal local solutions.
Our algorithm makes use of majorization-minimization (MM) to better solve this non-linear separation problem.
arXiv Detail & Related papers (2020-11-12T16:07:18Z)
- Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth Nonlinear TD Learning [145.54544979467872]
We propose two single-timescale single-loop algorithms that require only one data point each step.
Our results are expressed in a form of simultaneous primal and dual side convergence.
arXiv Detail & Related papers (2020-08-23T20:36:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.