Subspace Modeling for Fast Out-Of-Distribution and Anomaly Detection
- URL: http://arxiv.org/abs/2203.10422v1
- Date: Sun, 20 Mar 2022 00:55:20 GMT
- Title: Subspace Modeling for Fast Out-Of-Distribution and Anomaly Detection
- Authors: Ibrahima J. Ndiour, Nilesh A. Ahuja, Omesh Tickoo
- Abstract summary: This paper presents a principled approach for detecting anomalous and out-of-distribution (OOD) samples in deep neural networks (DNN).
We propose the application of linear statistical dimensionality reduction techniques on the semantic features produced by a DNN.
We show that the "feature reconstruction error" (FRE), which is the $\ell_2$-norm of the difference between the original feature in the high-dimensional space and the pre-image of its low-dimensional reduced embedding, is highly effective for OOD and anomaly detection.
- Score: 5.672132510411465
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a fast, principled approach for detecting anomalous and
out-of-distribution (OOD) samples in deep neural networks (DNN). We propose the
application of linear statistical dimensionality reduction techniques on the
semantic features produced by a DNN, in order to capture the low-dimensional
subspace truly spanned by said features. We show that the "feature
reconstruction error" (FRE), which is the $\ell_2$-norm of the difference
between the original feature in the high-dimensional space and the pre-image of
its low-dimensional reduced embedding, is highly effective for OOD and anomaly
detection. To generalize to intermediate features produced at any given layer,
we extend the methodology by applying nonlinear kernel-based methods.
Experiments using standard image datasets and DNN architectures demonstrate
that our method meets or exceeds best-in-class quality performance, but at a
fraction of the computational and memory cost required by the state of the art.
It can be trained and run very efficiently, even on a traditional CPU.
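The FRE score described in the abstract can be sketched with off-the-shelf PCA. The following is an illustrative reimplementation on synthetic features, not the authors' code; the feature dimensions, data, and scikit-learn usage are assumptions made for the example:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Hypothetical "DNN features": in-distribution data lies near a
# low-dimensional (8-dim) subspace of the 64-dim feature space.
basis = rng.standard_normal((8, 64))
train_feats = rng.standard_normal((500, 8)) @ basis   # in-distribution
ood_feats = rng.standard_normal((20, 64)) * 3.0       # off-subspace

# Fit the linear subspace model on in-distribution features only.
pca = PCA(n_components=8).fit(train_feats)

def fre_score(feats: np.ndarray) -> np.ndarray:
    """l2-norm between each feature and the pre-image of its reduced embedding."""
    recon = pca.inverse_transform(pca.transform(feats))
    return np.linalg.norm(feats - recon, axis=1)

in_scores = fre_score(train_feats)
ood_scores = fre_score(ood_feats)
print(in_scores.mean(), ood_scores.mean())  # OOD scores are far larger
```

Since the in-distribution features lie in the modeled subspace, their reconstruction error is near zero, while off-subspace samples incur a large residual; thresholding the score yields the detector.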
Related papers
- Adaptive Error-Bounded Hierarchical Matrices for Efficient Neural Network Compression [0.0]
This paper introduces a dynamic, error-bounded hierarchical matrix (H-matrix) compression method tailored for Physics-Informed Neural Networks (PINNs).
The proposed approach reduces the computational complexity and memory demands of large-scale physics-based models while preserving the essential properties of the Neural Tangent Kernel (NTK).
Empirical results demonstrate that this technique outperforms traditional compression methods, such as Singular Value Decomposition (SVD), pruning, and quantization, by maintaining high accuracy and improving generalization capabilities.
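The truncated-SVD baseline that this paper compares against can be sketched as follows; the function name, shapes, and error bound are illustrative assumptions, not the paper's H-matrix method:

```python
import numpy as np

def svd_compress(W: np.ndarray, rel_err: float):
    """Return low-rank factors (A, B) with ||W - A @ B||_F <= rel_err * ||W||_F."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    total = np.sum(s ** 2)
    # Smallest rank k whose discarded tail energy stays within the bound.
    k = len(s)
    for i in range(len(s) + 1):
        if np.sum(s[i:] ** 2) <= (rel_err ** 2) * total:
            k = i
            break
    return U[:, :k] * s[:k], Vt[:k]

rng = np.random.default_rng(0)
# A nearly rank-5 weight matrix plus small noise.
W = rng.standard_normal((80, 5)) @ rng.standard_normal((5, 60))
W += 0.01 * rng.standard_normal(W.shape)

A, B = svd_compress(W, rel_err=0.05)
err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
print(A.shape[1], err)  # low rank; error within the requested bound
```

The error bound holds by construction, since the Frobenius error of a rank-k truncation is exactly the energy of the discarded singular values.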
arXiv Detail & Related papers (2024-09-11T05:55:51Z) - The Convex Landscape of Neural Networks: Characterizing Global Optima
and Stationary Points via Lasso Models [75.33431791218302]
Deep Neural Network (DNN) models are trained by solving a nonconvex optimization problem.
In this paper we examine the use of convex neural recovery models.
We show that all stationary points of the nonconvex training objective can be characterized as the global optima of subsampled convex (Lasso) programs.
arXiv Detail & Related papers (2023-12-19T23:04:56Z) - Low-rank Tensor Assisted K-space Generative Model for Parallel Imaging
Reconstruction [14.438899814473446]
We present a new idea, low-rank tensor assisted k-space generative model (LR-KGM) for parallel imaging reconstruction.
That is, the original prior information is transformed into high-dimensional prior information for learning.
Experimental comparisons with state-of-the-art methods demonstrated that the proposed LR-KGM achieved better performance.
arXiv Detail & Related papers (2022-12-11T13:34:43Z) - FRE: A Fast Method For Anomaly Detection And Segmentation [5.0468312081378475]
This paper presents a principled approach for solving the visual anomaly detection and segmentation problem.
We propose the application of linear statistical dimensionality reduction techniques on the intermediate features produced by a pretrained DNN on the training data.
We show that the "feature reconstruction error" (FRE), which is the $\ell_2$-norm of the difference between the original feature in the high-dimensional space and the pre-image of its low-dimensional reduced embedding, is extremely effective for anomaly detection.
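The kernelized extension of FRE mentioned in the main abstract (handling intermediate features via nonlinear kernel methods) can be sketched with kernel PCA and its approximate pre-image map. The data, kernel choice, and hyperparameters below are illustrative assumptions, not the authors' setup:

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(1)

# Hypothetical intermediate-layer features on a nonlinear manifold:
# points near the unit circle embedded in a 10-dim feature space.
theta = rng.uniform(0, 2 * np.pi, 400)
normal = np.c_[np.cos(theta), np.sin(theta), np.zeros((400, 8))]
normal += 0.01 * rng.standard_normal(normal.shape)
anomalous = rng.standard_normal((20, 10)) * 2.0

# Kernel PCA with a learned approximate pre-image (inverse) map.
kpca = KernelPCA(n_components=4, kernel="rbf", gamma=2.0,
                 fit_inverse_transform=True).fit(normal)

def kernel_fre(x: np.ndarray) -> np.ndarray:
    """l2 distance between each feature and its kernel-PCA pre-image."""
    pre_image = kpca.inverse_transform(kpca.transform(x))
    return np.linalg.norm(x - pre_image, axis=1)

in_err = kernel_fre(normal).mean()
out_err = kernel_fre(anomalous).mean()
print(in_err, out_err)  # anomalous samples reconstruct far worse
```

Points off the training manifold map to near-zero RBF kernel features, so their pre-images fall back toward the training data and the reconstruction error is large.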
arXiv Detail & Related papers (2022-11-23T01:03:20Z) - DeepDC: Deep Distance Correlation as a Perceptual Image Quality
Evaluator [53.57431705309919]
ImageNet pre-trained deep neural networks (DNNs) show notable transferability for building effective image quality assessment (IQA) models.
We develop a novel full-reference IQA (FR-IQA) model based exclusively on pre-trained DNN features.
We conduct comprehensive experiments to demonstrate the superiority of the proposed quality model on five standard IQA datasets.
arXiv Detail & Related papers (2022-11-09T14:57:27Z) - Distributed Dynamic Safe Screening Algorithms for Sparse Regularization [73.85961005970222]
We propose a new distributed dynamic safe screening (DDSS) method for sparsity-regularized models and apply it to shared-memory and distributed-memory architectures, respectively.
We prove that the proposed method achieves the linear convergence rate with lower overall complexity and can eliminate almost all the inactive features in a finite number of iterations almost surely.
arXiv Detail & Related papers (2022-04-23T02:45:55Z) - Hybridization of Capsule and LSTM Networks for unsupervised anomaly
detection on multivariate data [0.0]
This paper introduces a novel NN architecture which hybridises Long Short-Term Memory (LSTM) and Capsule Networks into a single network.
The proposed method uses an unsupervised learning technique to overcome the issues with finding large volumes of labelled training data.
arXiv Detail & Related papers (2022-02-11T10:33:53Z) - A novel Deep Neural Network architecture for non-linear system
identification [78.69776924618505]
We present a novel Deep Neural Network (DNN) architecture for non-linear system identification.
Inspired by fading memory systems, we introduce inductive bias (on the architecture) and regularization (on the loss function).
This architecture allows for automatic complexity selection based solely on available data.
arXiv Detail & Related papers (2021-06-06T10:06:07Z) - Rank-R FNN: A Tensor-Based Learning Model for High-Order Data
Classification [69.26747803963907]
Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes Canonical/Polyadic decomposition on its parameters.
First, it handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets.
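The core idea behind the Rank-R factorization, processing matrix-shaped inputs without vectorization, can be illustrated with a single neuron. This is a toy sketch under my own assumptions, not the paper's model: for a matrix input X, the weight matrix W = Σ_r u_r v_rᵀ is never materialized, and the response ⟨W, X⟩ = Σ_r u_rᵀ X v_r is computed directly from the factors:

```python
import numpy as np

rng = np.random.default_rng(2)
d1, d2, R = 6, 5, 3

U = rng.standard_normal((d1, R))   # mode-1 factors u_r
V = rng.standard_normal((d2, R))   # mode-2 factors v_r
X = rng.standard_normal((d1, d2))  # matrix-shaped input, never vectorized

# Factored response: sum_r u_r^T X v_r, using only R*(d1+d2) parameters.
resp_factored = np.einsum("ir,ij,jr->", U, X, V)

# Equivalent dense computation through the full weight matrix W = U V^T.
W = U @ V.T
resp_dense = np.sum(W * X)

print(np.isclose(resp_factored, resp_dense))  # the two agree
```

The factored form needs R·(d1 + d2) parameters instead of d1·d2, which is the source of the parameter savings, and the per-mode factors preserve the structural information along each data dimension.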
arXiv Detail & Related papers (2021-04-11T16:37:32Z) - Densely Nested Top-Down Flows for Salient Object Detection [137.74130900326833]
This paper revisits the role of top-down modeling in salient object detection.
It designs a novel densely nested top-down flows (DNTDF)-based framework.
In every stage of the DNTDF, features from higher levels are read in via progressive compression shortcut paths (PCSP).
arXiv Detail & Related papers (2021-02-18T03:14:02Z) - Out-Of-Distribution Detection With Subspace Techniques And Probabilistic
Modeling Of Features [7.219077740523682]
This paper presents a principled approach for detecting out-of-distribution (OOD) samples in deep neural networks (DNN).
Modeling probability distributions on deep features has recently emerged as an effective, yet computationally cheap method to detect OOD samples in DNN.
We apply linear statistical dimensionality reduction techniques and nonlinear manifold-learning techniques on the high-dimensional features in order to capture the true subspace spanned by the features.
arXiv Detail & Related papers (2020-12-08T07:07:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.