Intrinsic Dimension Estimation via Nearest Constrained Subspace
Classifier
- URL: http://arxiv.org/abs/2002.03228v1
- Date: Sat, 8 Feb 2020 20:54:42 GMT
- Title: Intrinsic Dimension Estimation via Nearest Constrained Subspace
Classifier
- Authors: Liang Liao and Stephen John Maybank
- Abstract summary: A new subspace-based classifier is proposed for supervised classification or intrinsic dimension estimation.
The distribution of the data in each class is modeled by a union of a finite number of affine subspaces of the feature space.
The proposed method is a generalisation of the classical NN (Nearest Neighbor) and NFL (Nearest Feature Line) classifiers and has a close relationship to the NS (Nearest Subspace) classifier.
The proposed classifier with an accurately estimated dimension parameter generally outperforms its competitors in terms of classification accuracy.
- Score: 7.028302194243312
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problems of classification and intrinsic dimension estimation
on image data. A new subspace-based classifier is proposed for supervised
classification or intrinsic dimension estimation. The distribution of the data
in each class is modeled by a union of a finite number of affine subspaces of
the feature space. The affine subspaces have a common dimension, which is
assumed to be much less than the dimension of the feature space. The subspaces
are found using regression based on the L0-norm. The proposed method is a
generalisation of the classical NN (Nearest Neighbor) and NFL (Nearest Feature
Line) classifiers and has a close relationship to the NS (Nearest Subspace) classifier.
The proposed classifier with an accurately estimated dimension parameter
generally outperforms its competitors in terms of classification accuracy. We
also propose a fast version of the classifier using a neighborhood
representation to reduce its computational complexity. Experiments on publicly
available datasets corroborate these claims.
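To make the decision rule concrete, here is a minimal sketch in Python/NumPy. It is not the authors' L0-norm regression procedure: it simply measures, for each class, the distance from a query to the affine hull of the query's k nearest neighbours in that class and assigns the query to the class with the smallest residual. With k = 1 it reduces to NN and with k = 2 it is NFL-like, which illustrates the sense in which the method generalises those classifiers; all names below are illustrative.

```python
# Minimal nearest-affine-subspace sketch (not the paper's L0-norm regression):
# each class is represented locally by the affine hull of the query's k
# nearest neighbours from that class, and the query is assigned to the class
# with the smallest residual distance. k = 1 recovers NN; k = 2 is NFL-like.
import numpy as np

def affine_hull_distance(x, pts):
    """Euclidean distance from x to the affine hull of the rows of pts."""
    base = pts[0]
    dirs = (pts[1:] - base).T                 # spanning directions, D x (k-1)
    if dirs.size == 0:                        # k = 1: plain point distance
        return np.linalg.norm(x - base)
    coef, *_ = np.linalg.lstsq(dirs, x - base, rcond=None)
    return np.linalg.norm(x - base - dirs @ coef)

def predict(X_train, y_train, x, k=3):
    """Assign x to the class whose local affine subspace lies closest."""
    best_cls, best_dist = None, np.inf
    for cls in np.unique(y_train):
        pts = X_train[y_train == cls]
        nearest = pts[np.argsort(np.linalg.norm(pts - x, axis=1))[:k]]
        dist = affine_hull_distance(x, nearest)
        if dist < best_dist:
            best_cls, best_dist = cls, dist
    return best_cls
```

In this picture the common subspace dimension (roughly k - 1 above) plays the role of the paper's dimension parameter, and one natural reading of the abstract's link between classification and intrinsic dimension estimation is to sweep that parameter and keep the value with the best held-out accuracy.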
Related papers
- Adaptive $k$-nearest neighbor classifier based on the local estimation of the shape operator [49.87315310656657]
We introduce a new adaptive $k$-nearest neighbours ($kK$-NN) algorithm that explores the local curvature at a sample to adaptively define the neighborhood size.
Results on many real-world datasets indicate that the new $kK$-NN algorithm yields superior balanced accuracy compared to the established $k$-NN method.
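As a toy illustration only (the paper's estimator is built from the shape operator, which is not reproduced here), one can enlarge the neighbourhood where the data look locally flat and shrink it where they look curved; the flatness proxy and all thresholds below are assumptions:

```python
# Toy curvature-adaptive k-NN, NOT the paper's shape-operator estimator:
# use a larger k where a local 1-D PCA fit explains most of the variance
# (locally flat data) and a smaller k where it does not (curved data).
import numpy as np

def local_flatness(X, i, probe=10):
    """Fraction of local variance captured by the top principal direction."""
    d = np.linalg.norm(X - X[i], axis=1)
    nbrs = X[np.argsort(d)[1:probe + 1]]      # probe neighbours, self excluded
    centred = nbrs - nbrs.mean(axis=0)
    s = np.linalg.svd(centred, compute_uv=False)
    return s[0] ** 2 / max(np.sum(s ** 2), 1e-12)

def adaptive_k(X, i, k_min=3, k_max=15):
    """Interpolate k between k_min (curved region) and k_max (flat region)."""
    return int(round(k_min + local_flatness(X, i) * (k_max - k_min)))
```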
arXiv Detail & Related papers (2024-09-08T13:08:45Z)
- Relative intrinsic dimensionality is intrinsic to learning [49.5738281105287]
We introduce a new notion of the intrinsic dimension of a data distribution, which precisely captures the separability properties of the data.
For this intrinsic dimension, the rule of thumb above becomes a law: high intrinsic dimension guarantees highly separable data.
We show that this relative intrinsic dimension provides both upper and lower bounds on the probability of successfully learning and generalising in a binary classification problem.
arXiv Detail & Related papers (2023-10-10T10:41:45Z)
- Origins of Low-dimensional Adversarial Perturbations [17.17170592140042]
We study the phenomenon of low-dimensional adversarial perturbations in classification.
The goal is to fool the classifier into flipping its decision on a nonzero fraction of inputs from a designated class.
We compute lower bounds for the fooling rate of any subspace.
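The fooling rate also admits a cheap empirical lower bound: every perturbation found by random search inside the subspace certifies one flipped input, so the fraction flipped this way under-counts the true rate. A hedged sketch (the classifier interface, the search loop, and all parameters are assumptions):

```python
# Monte Carlo lower bound on the fooling rate of a perturbation subspace:
# the fraction of inputs from the designated class whose predicted label
# flips for some perturbation in span(B) of norm eps found by random search.
import numpy as np

def fooling_rate_lower_bound(predict, X_cls, B, eps=1.0, tries=100, seed=0):
    rng = np.random.default_rng(seed)
    labels = predict(X_cls)                   # predict: (n, D) -> (n,) labels
    fooled = 0
    for x, y0 in zip(X_cls, labels):
        for _ in range(tries):
            c = rng.standard_normal(B.shape[1])
            delta = B @ (eps * c / np.linalg.norm(c))  # lives in span(B)
            if predict((x + delta)[None, :])[0] != y0:
                fooled += 1
                break
    return fooled / len(X_cls)
```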
arXiv Detail & Related papers (2022-03-25T17:02:49Z)
- Approximation and generalization properties of the random projection classification method [0.4604003661048266]
We study a family of low-complexity classifiers obtained by thresholding a random one-dimensional feature.
For certain classification problems (e.g., those with a large Rashomon ratio), there is a potentially large gain in generalization properties from selecting parameters at random.
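Such a classifier is simple enough to state in a few lines. The sketch below draws random directions, projects the data onto each, and thresholds at the median; the repeated-draw selection rule and all names are illustrative assumptions, not necessarily the paper's procedure:

```python
# Random one-dimensional projection classifier: project onto a random
# direction and threshold. Drawing several candidates and keeping the best
# on training data is one simple selection rule (an illustrative choice).
import numpy as np

def fit_random_stump(X, y, n_draws=200, seed=0):
    """y is a 0/1 label vector; returns (direction, threshold, orientation)."""
    rng = np.random.default_rng(seed)
    best = (-1.0, None, None, None)
    for _ in range(n_draws):
        w = rng.standard_normal(X.shape[1])   # random 1-D feature
        z = X @ w
        t = np.median(z)                      # illustrative threshold choice
        for s in (1, -1):                     # orientation of the stump
            acc = np.mean((s * (z - t) > 0) == (y == 1))
            if acc > best[0]:
                best = (acc, w, t, s)
    return best[1:]

def predict_stump(X, w, t, s):
    return (s * (X @ w - t) > 0).astype(int)
```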
arXiv Detail & Related papers (2021-08-11T23:14:46Z)
- Learning optimally separated class-specific subspace representations
using convolutional autoencoder [0.0]
We propose a novel convolutional autoencoder-based architecture to generate subspace-specific feature representations.
To demonstrate the effectiveness of the proposed approach, several experiments have been carried out on state-of-the-art machine learning datasets.
arXiv Detail & Related papers (2021-05-19T00:45:34Z)
- A Local Similarity-Preserving Framework for Nonlinear Dimensionality
Reduction with Neural Networks [56.068488417457935]
We propose a novel local nonlinear approach named Vec2vec for general purpose dimensionality reduction.
To train the neural network, we build the neighborhood similarity graph of a matrix and define the context of data points.
Experiments on data classification and clustering over eight real datasets show that Vec2vec outperforms several classical dimensionality reduction methods under statistical hypothesis testing.
arXiv Detail & Related papers (2021-03-10T23:10:47Z)
- Linear Classifiers in Mixed Constant Curvature Spaces [40.82908295137667]
We address the problem of linear classification in a product space form -- a mix of Euclidean, spherical, and hyperbolic spaces.
We prove that linear classifiers in $d$-dimensional constant curvature spaces can shatter exactly $d+1$ points.
We describe a novel perceptron classification algorithm, and establish rigorous convergence results.
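For context, the Euclidean member of this family is the classical fact that affine classifiers in R^d have VC dimension d + 1 (a standard result, stated here for comparison; the paper extends the d + 1 count to spherical and hyperbolic factors):

```latex
% Euclidean baseline: affine classifiers in R^d shatter some set of d + 1
% points and no set of d + 2 points.
\mathrm{VCdim}\left( \{\, x \mapsto \operatorname{sign}(w^\top x + b) : w \in \mathbb{R}^d,\ b \in \mathbb{R} \,\} \right) = d + 1
```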
arXiv Detail & Related papers (2021-02-19T23:29:03Z)
- Joint and Progressive Subspace Analysis (JPSA) with Spatial-Spectral
Manifold Alignment for Semi-Supervised Hyperspectral Dimensionality Reduction [48.73525876467408]
We propose a novel technique for hyperspectral subspace analysis.
The technique is called joint and progressive subspace analysis (JPSA).
Experiments are conducted to demonstrate the superiority and effectiveness of the proposed JPSA on two widely-used hyperspectral datasets.
arXiv Detail & Related papers (2020-09-21T16:29:59Z)
- High-Dimensional Quadratic Discriminant Analysis under Spiked Covariance
Model [101.74172837046382]
We propose a novel quadratic classification technique, the parameters of which are chosen so that the Fisher discriminant ratio is maximized.
Numerical simulations show that the proposed classifier not only outperforms classical R-QDA on both synthetic and real data but also has lower computational complexity.
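For context, the Fisher discriminant ratio referred to here is the classical two-class criterion (standard definition, not taken from the paper itself): the separation of the projected class means relative to the projected within-class scatter,

```latex
% Two-class Fisher discriminant ratio for a projection direction w,
% with class means \mu_0, \mu_1 and class covariances \Sigma_0, \Sigma_1.
J(w) = \frac{\bigl( w^\top (\mu_1 - \mu_0) \bigr)^2}{\, w^\top (\Sigma_0 + \Sigma_1)\, w \,}
```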
arXiv Detail & Related papers (2020-06-25T12:00:26Z)
- Ellipsoidal Subspace Support Vector Data Description [98.67884574313292]
We propose a novel method for transforming data into a low-dimensional space optimized for one-class classification.
We provide both linear and non-linear formulations for the proposed method.
The proposed method is observed to converge much faster than the recently proposed Subspace Support Vector Data Description.
arXiv Detail & Related papers (2020-03-20T21:31:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.