Intrinsic Dimension Estimation via Nearest Constrained Subspace
Classifier
- URL: http://arxiv.org/abs/2002.03228v1
- Date: Sat, 8 Feb 2020 20:54:42 GMT
- Title: Intrinsic Dimension Estimation via Nearest Constrained Subspace
Classifier
- Authors: Liang Liao and Stephen John Maybank
- Abstract summary: A new subspace based classifier is proposed for supervised classification or intrinsic dimension estimation.
The distribution of the data in each class is modeled by a union of a finite number of affine subspaces of the feature space.
The proposed method is a generalisation of the classical NN (Nearest Neighbor) and NFL (Nearest Feature Line) classifiers and has a close relationship to the NS (Nearest Subspace) classifier.
The proposed classifier with an accurately estimated dimension parameter generally outperforms its competitors in terms of classification accuracy.
- Score: 7.028302194243312
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problems of classification and intrinsic dimension estimation
on image data. A new subspace based classifier is proposed for supervised
classification or intrinsic dimension estimation. The distribution of the data
in each class is modeled by a union of a finite number of affine subspaces of
the feature space. The affine subspaces have a common dimension, which is
assumed to be much less than the dimension of the feature space. The subspaces
are found using regression based on the L0-norm. The proposed method is a
generalisation of classical NN (Nearest Neighbor), NFL (Nearest Feature Line)
classifiers and has a close relationship to NS (Nearest Subspace) classifier.
The proposed classifier with an accurately estimated dimension parameter
generally outperforms its competitors in terms of classification accuracy. We
also propose a fast version of the classifier using a neighborhood
representation to reduce its computational complexity. Experiments on publicly
available datasets corroborate these claims.
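As a rough illustration of the abstract above: the paper fits a union of affine subspaces per class via L0-norm regression, then reads off the intrinsic dimension as the common dimension parameter that classifies best. The sketch below is a simplified stand-in, not the paper's algorithm: it fits a single PCA subspace per class (plain Nearest Subspace style) and sweeps the dimension parameter, keeping the value with the best validation accuracy. All function names and the PCA fitting step are illustrative assumptions.

```python
import numpy as np

def fit_class_subspace(X, d):
    """Fit a d-dimensional affine subspace to the rows of X via PCA.

    Simplified stand-in: the paper fits a *union* of subspaces per class
    with L0-norm regression; here one PCA subspace per class suffices.
    """
    mu = X.mean(axis=0)
    # Top-d right singular vectors of the centered data span the best-fit subspace.
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:d].T  # offset and orthonormal basis (n_features x d)

def subspace_distance(x, mu, B):
    """Euclidean distance from x to the affine subspace mu + span(B)."""
    r = x - mu
    return np.linalg.norm(r - B @ (B.T @ r))

def predict(x, models):
    """Nearest-subspace rule: assign x to the class with the closest subspace."""
    return min(models, key=lambda c: subspace_distance(x, *models[c]))

def estimate_dimension(train, val, dims):
    """Pick the common dimension d with the best validation accuracy --
    the paper's route from classification to intrinsic dimension estimation."""
    def accuracy(d):
        models = {c: fit_class_subspace(X, d) for c, X in train.items()}
        hits = sum(predict(x, models) == c for c, X in val.items() for x in X)
        return hits / sum(len(X) for X in val.values())
    return max(dims, key=accuracy)
```

For example, with two classes lying near distinct one-dimensional lines in a three-dimensional feature space, `estimate_dimension(train, val, [1, 2])` recovers dimension 1 and `predict` assigns query points to the class with the nearer line.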
Related papers
- Relative intrinsic dimensionality is intrinsic to learning [49.5738281105287]
We introduce a new notion of the intrinsic dimension of a data distribution, which precisely captures the separability properties of the data.
For this intrinsic dimension, the rule of thumb above becomes a law: high intrinsic dimension guarantees highly separable data.
We show that this relative intrinsic dimension provides both upper and lower bounds on the probability of successfully learning and generalising in a binary classification problem.
arXiv Detail & Related papers (2023-10-10T10:41:45Z) - Origins of Low-dimensional Adversarial Perturbations [17.17170592140042]
We study the phenomenon of low-dimensional adversarial perturbations in classification.
The goal is to fool the classifier into flipping its decision on a nonzero fraction of inputs from a designated class.
We compute lower bounds for the fooling rate of any subspace.
arXiv Detail & Related papers (2022-03-25T17:02:49Z) - Optimality and complexity of classification by random projection [1.5229257192293197]
The generalization error of a classifier is related to the complexity of the set of functions among which the classifier is chosen.
We show that this type of classifier is extremely flexible, as it is likely to approximate to an arbitrary precision.
In particular, given full knowledge of the class conditional densities, the error of these low-complexity classifiers would converge to the optimal (Bayes) error as k and n go to infinity.
arXiv Detail & Related papers (2021-08-11T23:14:46Z) - Learning optimally separated class-specific subspace representations
using convolutional autoencoder [0.0]
We propose a novel convolutional autoencoder based architecture to generate subspace specific feature representations.
To demonstrate the effectiveness of the proposed approach, several experiments have been carried out on state-of-the-art machine learning datasets.
arXiv Detail & Related papers (2021-05-19T00:45:34Z) - A Local Similarity-Preserving Framework for Nonlinear Dimensionality
Reduction with Neural Networks [56.068488417457935]
We propose a novel local nonlinear approach named Vec2vec for general purpose dimensionality reduction.
To train the neural network, we build the neighborhood similarity graph of a matrix and define the context of data points.
Experiments of data classification and clustering on eight real datasets show that Vec2vec is better than several classical dimensionality reduction methods in the statistical hypothesis test.
arXiv Detail & Related papers (2021-03-10T23:10:47Z) - Linear Classifiers in Mixed Constant Curvature Spaces [40.82908295137667]
We address the problem of linear classification in a product space form -- a mix of Euclidean, spherical, and hyperbolic spaces.
We prove that linear classifiers in $d$-dimensional constant curvature spaces can shatter exactly $d+1$ points.
We describe a novel perceptron classification algorithm, and establish rigorous convergence results.
arXiv Detail & Related papers (2021-02-19T23:29:03Z) - Joint and Progressive Subspace Analysis (JPSA) with Spatial-Spectral
Manifold Alignment for Semi-Supervised Hyperspectral Dimensionality Reduction [48.73525876467408]
We propose a novel technique for hyperspectral subspace analysis.
The technique is called joint and progressive subspace analysis (JPSA)
Experiments are conducted to demonstrate the superiority and effectiveness of the proposed JPSA on two widely-used hyperspectral datasets.
arXiv Detail & Related papers (2020-09-21T16:29:59Z) - High-Dimensional Quadratic Discriminant Analysis under Spiked Covariance
Model [101.74172837046382]
We propose a novel quadratic classification technique, the parameters of which are chosen such that the Fisher discriminant ratio is maximized.
Numerical simulations show that the proposed classifier not only outperforms the classical R-QDA for both synthetic and real data but also requires lower computational complexity.
arXiv Detail & Related papers (2020-06-25T12:00:26Z) - Robust Large-Margin Learning in Hyperbolic Space [64.42251583239347]
We present the first theoretical guarantees for learning a classifier in hyperbolic rather than Euclidean space.
We provide an algorithm to efficiently learn a large-margin hyperplane, relying on the careful injection of adversarial examples.
We prove that for hierarchical data that embeds well into hyperbolic space, the low embedding dimension ensures superior guarantees.
arXiv Detail & Related papers (2020-04-11T19:11:30Z) - Ellipsoidal Subspace Support Vector Data Description [98.67884574313292]
We propose a novel method for transforming data into a low-dimensional space optimized for one-class classification.
We provide both linear and non-linear formulations for the proposed method.
The proposed method is observed to converge much faster than the recently proposed Subspace Support Vector Data Description.
arXiv Detail & Related papers (2020-03-20T21:31:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.