Ellipsoidal Subspace Support Vector Data Description
- URL: http://arxiv.org/abs/2003.09504v1
- Date: Fri, 20 Mar 2020 21:31:03 GMT
- Title: Ellipsoidal Subspace Support Vector Data Description
- Authors: Fahad Sohrab, Jenni Raitoharju, Alexandros Iosifidis, Moncef Gabbouj
- Abstract summary: We propose a novel method for transforming data into a low-dimensional space optimized for one-class classification.
We provide both linear and non-linear formulations for the proposed method.
The proposed method is observed to converge much faster than the recently proposed Subspace Support Vector Data Description.
- Score: 98.67884574313292
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a novel method for transforming data into a
low-dimensional space optimized for one-class classification. The proposed
method iteratively transforms data into a new subspace optimized for
ellipsoidal encapsulation of target class data. We provide both linear and
non-linear formulations for the proposed method. The method takes into account
the covariance of the data in the subspace; hence, it yields a more generalized
solution than Subspace Support Vector Data Description, which encloses the
target data in a hypersphere. We propose different regularization terms expressing the class
variance in the projected space. We compare the results with classic and
recently proposed one-class classification methods and achieve better results
in the majority of cases. The proposed method is also observed to converge much
faster than the recently proposed Subspace Support Vector Data Description.
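As a rough illustration of the core idea (a minimal sketch, not the authors' exact algorithm), the snippet below learns a projection matrix `Q`, fits an ellipsoid via the covariance of the projected target class, and descends on the mean squared Mahalanobis distance; all names are hypothetical, and the paper's variance regularization terms (which prevent the trivial solution Q -> 0) are left out.

```python
import numpy as np

def fit_ellipsoid_subspace(X, d, n_iter=100, lr=1e-3):
    """Toy sketch: learn a projection Q (d x D) so that the projected
    target data is tightly enclosed by the ellipsoid induced by its own
    covariance. This illustrative gradient scheme is not the paper's
    exact update, and its variance regularization terms are omitted."""
    n, D = X.shape
    rng = np.random.default_rng(0)
    Q = rng.standard_normal((d, D)) * 0.01
    for _ in range(n_iter):
        Z = X @ Q.T                                   # project: (n, d)
        mu = Z.mean(axis=0)
        C = np.cov(Z, rowvar=False) + 1e-6 * np.eye(d)
        Cinv = np.linalg.inv(C)
        grad_Z = 2.0 * (Z - mu) @ Cinv / n            # d(mean Mahalanobis)/dZ
        Q -= lr * grad_Z.T @ X                        # chain rule for Z = X Q^T
    Z = X @ Q.T                                       # refresh the ellipsoid
    mu = Z.mean(axis=0)
    Cinv = np.linalg.inv(np.cov(Z, rowvar=False) + 1e-6 * np.eye(d))
    return Q, mu, Cinv

def anomaly_score(x, Q, mu, Cinv):
    z = Q @ x - mu                                    # map test point to subspace
    return float(z @ Cinv @ z)                        # large = outside ellipsoid
```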
Related papers
- Newton Method-based Subspace Support Vector Data Description [16.772385337198834]
We present an adaptation of Newton's method for the optimization of Subspace Support Vector Data Description (S-SVDD).
We leverage Newton's method to improve the data mapping and data description steps, yielding better optimization of subspace learning-based one-class classification.
The paper discusses the limitations of gradient descent and the advantages of using Newton's method in subspace learning for one-class classification tasks.
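For intuition about the contrast this summary draws, here is the textbook difference between the two update rules; this is generic Newton's method on a smooth objective, not the paper's S-SVDD-specific derivation.

```python
import numpy as np

def gradient_step(w, grad, lr=1e-2):
    # first-order update: a fixed step along the negative gradient
    return w - lr * grad(w)

def newton_step(w, grad, hess, damping=1e-6):
    # second-order update: the inverse Hessian rescales the gradient to
    # match local curvature, typically needing far fewer iterations
    H = hess(w) + damping * np.eye(w.size)  # damping keeps H invertible
    return w - np.linalg.solve(H, grad(w))

# For a quadratic f(w) = ||A w - b||^2, Newton converges in one step,
# while gradient descent needs many small steps:
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 0.0])
grad = lambda w: 2 * A.T @ (A @ w - b)
hess = lambda w: 2 * A.T @ A
w_star = newton_step(np.zeros(2), grad, hess)
```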
arXiv Detail & Related papers (2023-09-25T08:49:41Z)
- Optimal Projections for Discriminative Dictionary Learning using the JL-lemma [0.5461938536945723]
Dimensionality reduction-based dictionary learning methods have often used iterative random projections.
This paper proposes a constructive approach to derandomize the projection matrix using the Johnson-Lindenstrauss lemma.
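For context, a sketch of the randomized baseline that the paper derandomizes: a Gaussian random projection sized by the JL bound (the dimension formula below matches scikit-learn's johnson_lindenstrauss_min_dim); the paper's deterministic construction is not reproduced here.

```python
import numpy as np

def jl_min_dim(n_points, eps=0.1):
    # Johnson-Lindenstrauss bound: projecting to k = O(log n / eps^2)
    # dimensions preserves all pairwise distances within (1 +/- eps)
    return int(np.ceil(4.0 * np.log(n_points) / (eps**2 / 2.0 - eps**3 / 3.0)))

def random_projection(X, eps=0.1, seed=0):
    n, D = X.shape
    k = jl_min_dim(n, eps)
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((D, k)) / np.sqrt(k)   # scaled Gaussian matrix
    return X @ R                                   # (n, k) embedding
```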
arXiv Detail & Related papers (2023-08-27T02:59:59Z)
- Decomposed Diffusion Sampler for Accelerating Large-Scale Inverse Problems [64.29491112653905]
We propose a novel and efficient diffusion sampling strategy that synergistically combines diffusion sampling with Krylov subspace methods.
Specifically, we prove that if the tangent space at a sample denoised by Tweedie's formula forms a Krylov subspace, then conjugate gradient (CG) initialized with the denoised data keeps the data-consistency update within that tangent space.
Our proposed method achieves more than 80 times faster inference time than the previous state-of-the-art method.
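A minimal conjugate gradient routine illustrating the Krylov-subspace property the summary invokes: every CG iterate stays in x0 plus span{r0, A r0, A^2 r0, ...}, so an update started from denoised data never leaves that subspace. The interleaving with diffusion denoising is the paper's contribution and is omitted here.

```python
import numpy as np

def conjugate_gradient(A, b, x0, n_iter=10, tol=1e-8):
    """Minimal CG for symmetric positive-definite A. Each iterate stays
    in x0 + Krylov subspace span{r0, A r0, A^2 r0, ...}, which is the
    property exploited for data-consistency updates."""
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```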
arXiv Detail & Related papers (2023-03-10T07:42:49Z)
- Graph-Embedded Subspace Support Vector Data Description [98.78559179013295]
We propose a novel subspace learning framework for one-class classification.
The proposed framework formulates the problem as graph embedding.
We demonstrate improved performance against the baselines and the recently proposed subspace learning methods for one-class classification.
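A sketch of the generic ingredient that graph-embedding formulations optimize: a Laplacian penalty keeping graph-connected samples close after projection. The specific graphs and their coupling with the SVDD objective are the paper's; this snippet only shows the standard identity.

```python
import numpy as np

def laplacian_penalty(Z, W):
    """Graph-embedding regularizer on projected data Z (n x d) with
    affinity matrix W (n x n):
        sum_ij W_ij ||z_i - z_j||^2 = 2 * tr(Z^T L Z),  with L = D - W."""
    L = np.diag(W.sum(axis=1)) - W      # graph Laplacian
    return 2.0 * np.trace(Z.T @ L @ Z)
```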
arXiv Detail & Related papers (2021-04-29T14:30:48Z)
- Kernel Two-Dimensional Ridge Regression for Subspace Clustering [45.651770340521786]
We propose a novel subspace clustering method for 2D data.
It uses 2D data directly as inputs, so that representation learning benefits from the inherent structures and relationships of the data.
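One plausible reading in code (an assumption, not the paper's actual formulation): compute a kernel directly between matrix-valued samples so the 2D structure is never flattened, then solve the ridge-regression self-expression in closed form and hand the affinity to spectral clustering.

```python
import numpy as np

def frobenius_rbf_kernel(mats, gamma=1e-3):
    # a kernel computed directly between matrix-valued (2D) samples,
    # so the data is never vectorized (assumed kernel choice)
    n = len(mats)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            K[i, j] = np.exp(-gamma * np.linalg.norm(mats[i] - mats[j], 'fro') ** 2)
    return K

def ridge_self_representation(K, lam=1.0):
    # closed-form ridge self-expression Z = (K + lam*I)^{-1} K; the
    # symmetrized affinity |Z| + |Z|^T then feeds spectral clustering, e.g.
    # SpectralClustering(n_clusters=k, affinity='precomputed') in scikit-learn
    n = K.shape[0]
    Z = np.linalg.solve(K + lam * np.eye(n), K)
    return np.abs(Z) + np.abs(Z).T
```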
arXiv Detail & Related papers (2020-11-03T04:52:46Z)
- Random extrapolation for primal-dual coordinate descent [61.55967255151027]
We introduce a randomly extrapolated primal-dual coordinate descent method that adapts to the sparsity of the data matrix and to favorable structures of the objective function.
We show almost sure convergence of the sequence and optimal sublinear convergence rates for the primal-dual gap and objective values, in the general convex-concave case.
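As a hedged baseline sketch: the deterministic primal-dual iteration with extrapolation for min_x lam*||x||_1 + 0.5*||Ax - b||^2. The paper's method randomizes both the coordinate choice and the extrapolation step, which this full-vector version omits.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def primal_dual(A, b, lam=0.1, n_iter=200):
    """Chambolle-Pock-style primal-dual iteration for
    min_x lam*||x||_1 + 0.5*||Ax - b||^2, with the deterministic
    extrapolation step x_bar that the paper replaces by a *random*
    extrapolation applied per coordinate block."""
    m, n = A.shape
    Lnorm = np.linalg.norm(A, 2)          # spectral norm of A
    tau = sigma = 0.9 / Lnorm             # tau*sigma*||A||^2 < 1
    x = np.zeros(n)
    y = np.zeros(m)
    x_bar = x.copy()
    for _ in range(n_iter):
        y = (y + sigma * (A @ x_bar - b)) / (1.0 + sigma)       # dual prox of g*
        x_new = soft_threshold(x - tau * (A.T @ y), tau * lam)  # primal prox
        x_bar = 2.0 * x_new - x                                 # extrapolation
        x = x_new
    return x
```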
arXiv Detail & Related papers (2020-07-13T17:39:35Z)
- Two-Dimensional Semi-Nonnegative Matrix Factorization for Clustering [50.43424130281065]
We propose a new Semi-Nonnegative Matrix Factorization method for 2-dimensional (2D) data, named TS-NMF.
It overcomes a drawback of existing methods, which damage the spatial information of the data by converting 2D data to vectors in a preprocessing step.
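A sketch of classical semi-NMF (the multiplicative updates of Ding et al.), which TS-NMF extends to 2D data by additionally learning spatial transforms; that 2D coupling is not reproduced here.

```python
import numpy as np

def semi_nmf(X, k, n_iter=100, eps=1e-9):
    """Classical semi-NMF: X ~ F @ G.T with G >= 0 and F unconstrained.
    TS-NMF additionally learns 2D transforms so images need not be
    vectorized; that coupling is omitted in this baseline sketch."""
    rng = np.random.default_rng(0)
    n, m = X.shape
    G = np.abs(rng.standard_normal((m, k)))
    for _ in range(n_iter):
        F = X @ G @ np.linalg.pinv(G.T @ G)          # least-squares F update
        A = X.T @ F                                  # (m, k)
        B = F.T @ F                                  # (k, k)
        Ap, An = (np.abs(A) + A) / 2, (np.abs(A) - A) / 2
        Bp, Bn = (np.abs(B) + B) / 2, (np.abs(B) - B) / 2
        G *= np.sqrt((Ap + G @ Bn) / (An + G @ Bp + eps))  # multiplicative step
    return F, G
```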
arXiv Detail & Related papers (2020-05-19T05:54:14Z)
- Intrinsic Dimension Estimation via Nearest Constrained Subspace Classifier [7.028302194243312]
A new subspace-based classifier is proposed for supervised classification or intrinsic dimension estimation.
The distribution of the data in each class is modeled by a union of a finite number of affine subspaces of the feature space.
The proposed method generalizes the classical NN (Nearest Neighbor) and NFL (Nearest Feature Line) classifiers and is closely related to NS (Nearest Subspace).
The proposed classifier with an accurately estimated dimension parameter generally outperforms its competitors in terms of classification accuracy.
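A minimal sketch of the nearest-subspace idea the summary describes, with each class modeled by a single PCA-fitted affine subspace (the paper allows a union of several per class); all names are hypothetical.

```python
import numpy as np

def affine_subspace(Xc, d):
    # fit a d-dimensional affine subspace to one class via PCA
    mu = Xc.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc - mu, full_matrices=False)
    return mu, Vt[:d]                     # centre and orthonormal basis

def nearest_subspace_label(x, models):
    # classify by residual distance to each class's affine subspace;
    # the subspace dimension d plays the role of the estimated
    # intrinsic dimension parameter
    dists = []
    for mu, B in models:
        r = x - mu
        dists.append(np.linalg.norm(r - B.T @ (B @ r)))
    return int(np.argmin(dists))
```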
arXiv Detail & Related papers (2020-02-08T20:54:42Z)