Semi-orthogonal Embedding for Efficient Unsupervised Anomaly
Segmentation
- URL: http://arxiv.org/abs/2105.14737v1
- Date: Mon, 31 May 2021 07:02:20 GMT
- Title: Semi-orthogonal Embedding for Efficient Unsupervised Anomaly
Segmentation
- Authors: Jin-Hwa Kim, Do-Hyeong Kim, Saehoon Yi, Taehoon Lee
- Abstract summary: We generalize an ad-hoc method, random feature selection, into semi-orthogonal embedding for robust approximation.
With the scrutiny of ablation studies, the proposed method achieves a new state-of-the-art with significant margins for the MVTec AD, KolektorSDD, KolektorSDD2, and mSTC datasets.
- Score: 6.135577623169028
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present the efficiency of semi-orthogonal embedding for unsupervised
anomaly segmentation. Multi-scale features from pre-trained CNNs have recently
been used to compute localized Mahalanobis distances with strong performance.
However, the increased feature size makes it difficult to scale to larger CNNs,
since it requires the batch inverse of a multi-dimensional covariance tensor.
Here, we generalize an ad-hoc method, random feature selection, into
semi-orthogonal embedding for robust approximation, cubically reducing the
computational cost of inverting the multi-dimensional covariance tensor.
Supported by careful ablation studies, the proposed method achieves a new
state of the art by significant margins on the MVTec AD, KolektorSDD,
KolektorSDD2, and mSTC datasets. Theoretical and empirical analyses offer
insight into, and verification of, our straightforward yet cost-effective approach.
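The core idea in the abstract — projecting high-dimensional CNN features through a semi-orthogonal matrix so that only a small covariance matrix must be inverted — can be sketched in NumPy. This is a minimal illustrative sketch, not the paper's implementation: the dimensions, the QR construction of the projection, and the regularization constant are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 512, 64, 1000           # feature dim, embedding dim, #normal samples

# Semi-orthogonal projection W (d x k) with orthonormal columns (W.T @ W = I_k),
# obtained here from the QR decomposition of a random Gaussian matrix.
W, _ = np.linalg.qr(rng.standard_normal((d, k)))

X = rng.standard_normal((n, d))   # stand-in for pre-trained CNN features

Z = X @ W                         # project features from d dims down to k dims
mu = Z.mean(axis=0)
cov = np.cov(Z, rowvar=False) + 1e-3 * np.eye(k)  # regularized k x k covariance
cov_inv = np.linalg.inv(cov)      # O(k^3) inverse instead of O(d^3)

def mahalanobis(z):
    """Anomaly score of a projected feature vector z (k,)."""
    delta = z - mu
    return float(np.sqrt(delta @ cov_inv @ delta))

score = mahalanobis(X[0] @ W)     # score a single feature vector
```

The cubic saving follows directly: matrix inversion costs roughly O(m^3) in the matrix size m, so replacing a d x d covariance with a k x k one (with k much smaller than d, at every spatial location) reduces the inversion cost by a factor of about (d/k)^3.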
Related papers
- Sparse Tensor PCA via Tensor Decomposition for Unsupervised Feature Selection [8.391109286933856]
We develop two Sparse Tensor Principal Component Analysis (STPCA) models that utilize the projection directions in the factor matrices to perform unsupervised feature selection.
For both models, we prove that the optimal solution of each subproblem falls onto the Hermitian Positive Semidefinite Cone (HPSD).
According to the experimental results, the two proposed methods are suitable for handling different data tensor scenarios and outperform the state-of-the-art UFS methods.
arXiv Detail & Related papers (2024-07-24T04:04:56Z)
- Variance-Reducing Couplings for Random Features [57.73648780299374]
Random features (RFs) are a popular technique to scale up kernel methods in machine learning.
We find couplings to improve RFs defined on both Euclidean and discrete input spaces.
We reach surprising conclusions about the benefits and limitations of variance reduction as a paradigm.
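As background for the entry above, plain (uncoupled) random Fourier features can be sketched as follows; the coupling constructions studied in the paper change how the frequency matrix is drawn, which is not shown here. Names and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, D = 8, 256                     # input dim, number of random features

# Random Fourier features approximating the RBF kernel k(x, y) = exp(-||x-y||^2 / 2):
# phi(x) = sqrt(2/D) * cos(Omega @ x + b), with Omega ~ N(0, I), b ~ U[0, 2*pi).
Omega = rng.standard_normal((D, d))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def phi(x):
    return np.sqrt(2.0 / D) * np.cos(Omega @ x + b)

x, y = rng.standard_normal(d), rng.standard_normal(d)
approx = phi(x) @ phi(y)                          # Monte Carlo kernel estimate
exact = np.exp(-np.linalg.norm(x - y) ** 2 / 2)   # exact RBF kernel value
```

The estimator is unbiased but has Monte Carlo variance decaying like 1/D; variance-reducing couplings aim to shrink that constant without increasing D.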
arXiv Detail & Related papers (2024-05-26T12:25:09Z)
- Exploiting Structure for Optimal Multi-Agent Bayesian Decentralized Estimation [4.320393382724066]
A key challenge in Bayesian decentralized data fusion is the 'rumor propagation' or 'double counting' phenomenon.
We show that by exploiting the probabilistic independence structure in multi-agent decentralized fusion problems a tighter bound can be found.
We then test our new non-monolithic CI algorithm on a large-scale target tracking simulation and show that it achieves a tighter bound and a more accurate estimate.
arXiv Detail & Related papers (2023-07-20T05:16:33Z)
- Multi-View Clustering via Semi-non-negative Tensor Factorization [120.87318230985653]
We develop a novel multi-view clustering method based on semi-non-negative tensor factorization (Semi-NTF).
Our model directly considers the between-view relationship and exploits the between-view complementary information.
In addition, we provide an optimization algorithm for the proposed method and prove mathematically that the algorithm always converges to the stationary KKT point.
arXiv Detail & Related papers (2023-03-29T14:54:19Z)
- Compound Batch Normalization for Long-tailed Image Classification [77.42829178064807]
We propose a compound batch normalization method based on a Gaussian mixture.
It can model the feature space more comprehensively and reduce the dominance of head classes.
The proposed method outperforms existing methods on long-tailed image classification.
arXiv Detail & Related papers (2022-12-02T07:31:39Z)
- Asymptotically Unbiased Instance-wise Regularized Partial AUC Optimization: Theory and Algorithm [101.44676036551537]
One-way Partial AUC (OPAUC) and Two-way Partial AUC (TPAUC) measure the average performance of a binary classifier.
Most of the existing methods could only optimize PAUC approximately, leading to inevitable biases that are not controllable.
We present a simpler reformulation of the PAUC problem via a distributionally robust optimization of the AUC.
arXiv Detail & Related papers (2022-10-08T08:26:22Z)
- ER: Equivariance Regularizer for Knowledge Graph Completion [107.51609402963072]
We propose a new regularizer, namely the Equivariance Regularizer (ER).
ER can enhance the generalization ability of the model by employing the semantic equivariance between the head and tail entities.
The experimental results indicate a clear and substantial improvement over the state-of-the-art relation prediction methods.
arXiv Detail & Related papers (2022-06-24T08:18:05Z)
- Machine Learning and Variational Algorithms for Lattice Field Theory [1.198562319289569]
In lattice quantum field theory studies, parameters defining the lattice theory must be tuned toward criticality to access continuum physics.
We introduce an approach to "deform" Monte Carlo estimators based on contour deformations applied to the domain of the path integral.
We demonstrate that flow-based MCMC can mitigate critical slowing down and observifolds can exponentially reduce variance in proof-of-principle applications.
arXiv Detail & Related papers (2021-06-03T16:37:05Z)
- Sparse PCA via $l_{2,p}$-Norm Regularization for Unsupervised Feature Selection [138.97647716793333]
We propose a simple and efficient unsupervised feature selection method by combining reconstruction error with $l_{2,p}$-norm regularization.
We present an efficient optimization algorithm to solve the proposed unsupervised model, and analyse the convergence and computational complexity of the algorithm theoretically.
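The $l_{2,p}$ penalty and the selection step it enables can be sketched briefly. This is a toy illustration, not the paper's algorithm: the matrix `W` here is random rather than learned by minimizing reconstruction error, and all names and dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, p, k = 20, 10, 0.5, 5       # features, components, norm power, #selected

# Stand-in for the learned transformation matrix; real l_{2,p} methods learn W
# by minimizing reconstruction error plus the row-sparsity penalty below,
# which drives unimportant feature rows toward zero.
W = rng.standard_normal((d, m))

row_norms = np.linalg.norm(W, axis=1)   # ||w_i||_2 for each feature's row
l2p_penalty = np.sum(row_norms ** p)    # the l_{2,p} regularizer (0 < p <= 1)
selected = np.argsort(row_norms)[-k:]   # keep the k features with largest rows
```

Choosing p below 1 penalizes small-but-nonzero rows more aggressively than the convex l_{2,1} case, yielding sparser row supports at the cost of a nonconvex objective.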
arXiv Detail & Related papers (2020-12-29T04:08:38Z)
- The Heavy-Tail Phenomenon in SGD [7.366405857677226]
We show that depending on the structure of the Hessian of the loss at the minimum, the SGD iterates converge to a heavy-tailed stationary distribution.
We translate our results into insights about the behavior of SGD in deep learning.
arXiv Detail & Related papers (2020-06-08T16:43:56Z)
- Efficient Structure-preserving Support Tensor Train Machine [0.0]
We develop the Train Multi-way Multi-level Kernel (TT-MMK), which combines the simplicity of the Polyadic decomposition, the classification power of the Dual Structure-preserving Support Machine, and the reliability of the Train Vector approximation.
We show by experiments that the TT-MMK method is usually more reliable, less sensitive to tuning parameters, and gives higher prediction accuracy in the SVM classification when benchmarked against other state-of-the-art techniques.
arXiv Detail & Related papers (2020-02-12T16:35:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.