Optimal Discriminant Analysis in High-Dimensional Latent Factor Models
- URL: http://arxiv.org/abs/2210.12862v1
- Date: Sun, 23 Oct 2022 21:45:53 GMT
- Title: Optimal Discriminant Analysis in High-Dimensional Latent Factor Models
- Authors: Xin Bing and Marten Wegkamp
- Abstract summary: In high-dimensional classification problems, a commonly used approach is to first project the high-dimensional features into a lower dimensional space.
We formulate a latent-variable model with a hidden low-dimensional structure to justify this two-step procedure.
We propose a computationally efficient classifier that takes certain principal components (PCs) of the observed features as projections.
- Score: 1.4213973379473654
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In high-dimensional classification problems, a commonly used approach is to
first project the high-dimensional features into a lower dimensional space, and
base the classification on the resulting lower dimensional projections. In this
paper, we formulate a latent-variable model with a hidden low-dimensional
structure to justify this two-step procedure and to guide which projection to
choose. We propose a computationally efficient classifier that takes certain
principal components (PCs) of the observed features as projections, with the
number of retained PCs selected in a data-driven way. A general theory is
established for analyzing such two-step classifiers based on any projections.
We derive explicit rates of convergence of the excess risk of the proposed
PC-based classifier. The obtained rates are further shown to be optimal up to
logarithmic factors in the minimax sense. Our theory allows the lower dimension
to grow with the sample size and remains valid even when the feature dimension
(greatly) exceeds the sample size. Extensive simulations corroborate our
theoretical findings. The proposed method also performs favorably relative to
other existing discriminant methods on three real data examples.
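
The two-step procedure described in the abstract maps naturally onto standard tooling. Below is a minimal sketch in Python, assuming scikit-learn is available: it projects the features onto their leading principal components and applies linear discriminant analysis to the projections. Cross-validation over the number of retained PCs is a hypothetical stand-in for the paper's data-driven selection rule, which the abstract does not spell out.

```python
# Minimal sketch of the two-step PC-based classifier described above.
# Assumption (mine): cross-validated accuracy stands in for the paper's
# data-driven rule for choosing the number of retained PCs.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import GridSearchCV

def make_pc_classifier(max_pcs=20, cv=5):
    pipe = Pipeline([
        ("pca", PCA()),                         # step 1: project onto PCs
        ("lda", LinearDiscriminantAnalysis()),  # step 2: classify projections
    ])
    grid = {"pca__n_components": range(1, max_pcs + 1)}
    return GridSearchCV(pipe, grid, cv=cv)

# Usage on synthetic data from a latent factor model: a hidden
# low-dimensional factor drives both the features and the labels.
rng = np.random.default_rng(0)
n, p, k = 200, 500, 3
A = rng.normal(size=(p, k))                  # loading matrix
Z = rng.normal(size=(n, k))                  # hidden low-dimensional factors
y = (Z[:, 0] > 0).astype(int)                # labels driven by the factors
X = Z @ A.T + 0.5 * rng.normal(size=(n, p))  # observed high-dim features
clf = make_pc_classifier().fit(X, y)
print(clf.best_params_, clf.score(X, y))
```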
Related papers
- Latent Semantic Consensus For Deterministic Geometric Model Fitting [109.44565542031384]
We propose an effective method called Latent Semantic Consensus (LSC).
LSC casts the model fitting problem into two latent semantic spaces based on data points and model hypotheses.
LSC is able to provide consistent and reliable solutions within only a few milliseconds for general multi-structural model fitting.
arXiv Detail & Related papers (2024-03-11T05:35:38Z)
- Representer Point Selection for Explaining Regularized High-dimensional Models [105.75758452952357]
We introduce a class of sample-based explanations we term high-dimensional representers.
Our workhorse is a novel representer theorem for general regularized high-dimensional models.
We study the empirical performance of our proposed methods on three real-world binary classification datasets and two recommender system datasets.
arXiv Detail & Related papers (2023-05-31T16:23:58Z)
- Sharp-SSL: Selective high-dimensional axis-aligned random projections for semi-supervised learning [16.673022545571566]
We propose a new method for high-dimensional semi-supervised learning problems.
It is based on the careful aggregation of the results of a low-dimensional procedure applied to many axis-aligned random projections of the data.
arXiv Detail & Related papers (2023-04-18T17:49:02Z)
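
The Sharp-SSL recipe above is easy to prototype: run a simple low-dimensional learner on many axis-aligned random projections (random coordinate subsets) and aggregate per-coordinate evidence. The sketch below is a rough illustration under assumptions of my own (LDA as the low-dimensional procedure, averaged coefficient magnitudes as the aggregation, labelled data only); the paper's actual aggregation rule and semi-supervised refinement are more careful.

```python
# Rough sketch: aggregate a low-dimensional procedure over many
# axis-aligned random projections. Assumptions (mine, not the paper's):
# LDA coefficient magnitudes score each coordinate, scores are averaged
# across projections, and only the labelled data are used.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def aggregate_axis_aligned(X, y, n_proj=500, d=5, rng=None):
    rng = np.random.default_rng(rng)
    p = X.shape[1]
    scores, counts = np.zeros(p), np.zeros(p)
    for _ in range(n_proj):
        idx = rng.choice(p, size=d, replace=False)  # axis-aligned projection
        lda = LinearDiscriminantAnalysis().fit(X[:, idx], y)
        scores[idx] += np.abs(lda.coef_).sum(axis=0)  # per-coordinate evidence
        counts[idx] += 1
    return scores / np.maximum(counts, 1)  # average evidence per coordinate

# Coordinates with high average evidence are candidates for the signal set.
```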
- Origins of Low-dimensional Adversarial Perturbations [17.17170592140042]
We study the phenomenon of low-dimensional adversarial perturbations in classification.
The goal is to fool the classifier into flipping its decision on a nonzero fraction of inputs from a designated class.
We compute lower bounds for the fooling rate of any subspace.
arXiv Detail & Related papers (2022-03-25T17:02:49Z)
- Sampling from Arbitrary Functions via PSD Models [55.41644538483948]
We take a two-step approach by first modeling the probability distribution and then sampling from that model.
We show that these models can approximate a large class of densities concisely using few evaluations, and present a simple algorithm to effectively sample from these models.
arXiv Detail & Related papers (2021-10-20T12:25:22Z)
- Dimensionality Reduction via Diffusion Map Improved with Supervised Linear Projection [1.7513645771137178]
In this paper, we assume the data samples lie on a single underlying smooth manifold.
We define intra-class and inter-class similarities using pairwise local kernel distances.
We aim to find a linear projection to maximize the intra-class similarities and minimize the inter-class similarities simultaneously.
arXiv Detail & Related papers (2020-08-08T04:26:07Z)
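
The projection step in the entry above can be phrased as a generalized eigenproblem, in the spirit of locality-preserving discriminant methods. The following is a loose sketch under my own assumptions (Gaussian kernel weights, a Rayleigh-quotient criterion solved by generalized eigenvectors); the paper's exact construction may differ.

```python
# Loose sketch: a supervised linear projection built from kernel-weighted
# pairwise similarities. Assumptions (mine): Gaussian kernel weights and a
# generalized-eigenvector criterion; the paper's objective may differ.
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import pdist, squareform

def supervised_projection(X, y, n_dims=2, sigma=1.0):
    K = np.exp(-squareform(pdist(X)) ** 2 / (2 * sigma ** 2))  # local kernel
    same = (y[:, None] == y[None, :]).astype(float)
    W_intra = K * same           # similarities within each class
    W_inter = K * (1.0 - same)   # similarities across classes
    L_intra = np.diag(W_intra.sum(1)) - W_intra  # graph Laplacians
    L_inter = np.diag(W_inter.sum(1)) - W_inter
    A = X.T @ L_inter @ X                              # cross-class spread
    B = X.T @ L_intra @ X + 1e-6 * np.eye(X.shape[1])  # compactness + ridge
    # Maximize spread across classes relative to spread within classes:
    vals, vecs = eigh(A, B)
    return vecs[:, ::-1][:, :n_dims]  # top generalized eigenvectors

# Z = X @ supervised_projection(X, y) gives the low-dimensional embedding.
```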
- The classification for High-dimension low-sample size data [3.411873646414169]
We propose a novel classification criterion for HDLSS data, called tolerance, which emphasizes similarity of within-class variance on the premise of class separability.
Based on this criterion, we design a novel linear binary classifier, denoted No-separated Data Dispersion Maximum (NPDMD).
NPDMD has several distinctive characteristics compared to state-of-the-art classification methods.
arXiv Detail & Related papers (2020-06-21T07:04:16Z)
- Deep Dimension Reduction for Supervised Representation Learning [51.10448064423656]
We propose a deep dimension reduction approach to learning representations with essential characteristics.
The proposed approach is a nonparametric generalization of the sufficient dimension reduction method.
We show that the estimated deep nonparametric representation is consistent in the sense that its excess risk converges to zero.
arXiv Detail & Related papers (2020-06-10T14:47:43Z)
- Unsupervised Discretization by Two-dimensional MDL-based Histogram [0.0]
Unsupervised discretization is a crucial step in many knowledge discovery tasks.
We propose an expressive model class that allows for far more flexible partitions of two-dimensional data.
We introduce an algorithm, named PALM, which Partitions each dimension ALternately and then Merges neighboring regions.
arXiv Detail & Related papers (2020-06-02T19:19:49Z)
- Two-Dimensional Semi-Nonnegative Matrix Factorization for Clustering [50.43424130281065]
We propose a new Semi-Nonnegative Matrix Factorization method for 2-dimensional (2D) data, named TS-NMF.
It overcomes the drawback of existing methods that seriously damage the spatial information of the data by converting 2D data to vectors in a preprocessing step.
arXiv Detail & Related papers (2020-05-19T05:54:14Z)
- Asymptotic Analysis of an Ensemble of Randomly Projected Linear Discriminants [94.46276668068327]
In [1], an ensemble of randomly projected linear discriminants is used to classify datasets.
We develop a consistent estimator of the misclassification probability as an alternative to the computationally-costly cross-validation estimator.
We also demonstrate the use of our estimator for tuning the projection dimension on both real and synthetic data.
arXiv Detail & Related papers (2020-04-17T12:47:04Z)
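
The classifier analyzed in the last entry is straightforward to emulate. Below is a minimal sketch, assuming Gaussian random projections, LDA base learners, binary 0/1 labels, and majority voting; the paper's consistent estimator of the misclassification probability is not reproduced here.

```python
# Minimal sketch of an ensemble of randomly projected linear discriminants:
# each member sees the data through a random Gaussian projection and the
# ensemble classifies by majority vote. Assumptions (mine): Gaussian
# projections, binary 0/1 labels, and majority voting.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

class RPLDAEnsemble:
    def __init__(self, n_members=50, proj_dim=10, seed=0):
        self.n_members, self.proj_dim, self.seed = n_members, proj_dim, seed

    def fit(self, X, y):
        rng = np.random.default_rng(self.seed)
        p = X.shape[1]
        # One random projection and one LDA per ensemble member.
        self.projs_ = [rng.normal(size=(p, self.proj_dim)) / np.sqrt(self.proj_dim)
                       for _ in range(self.n_members)]
        self.ldas_ = [LinearDiscriminantAnalysis().fit(X @ R, y)
                      for R in self.projs_]
        return self

    def predict(self, X):
        votes = np.stack([lda.predict(X @ R)
                          for R, lda in zip(self.projs_, self.ldas_)])
        # Majority vote across members (labels assumed to be 0/1).
        return (votes.mean(axis=0) > 0.5).astype(int)
```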