High-Order Multilinear Discriminant Analysis via Order-$\textit{n}$
Tensor Eigendecomposition
- URL: http://arxiv.org/abs/2205.09191v1
- Date: Wed, 18 May 2022 19:49:54 GMT
- Title: High-Order Multilinear Discriminant Analysis via Order-$\textit{n}$
Tensor Eigendecomposition
- Authors: Cagri Ozdemir, Randy C. Hoover, Kyle Caudle, and Karen Braman
- Abstract summary: This paper presents a new approach to tensor-based multilinear discriminant analysis referred to as High-Order Multilinear Discriminant Analysis (HOMLDA).
Our proposed approach provides improved classification performance with respect to the current Tucker decomposition-based supervised learning methods.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Higher-order data with high dimensionality is of immense importance in many
areas of machine learning, computer vision, and video analytics.
Multidimensional arrays (commonly referred to as tensors) are used for
arranging higher-order data structures while keeping the natural representation
of the data samples. In the past decade, great efforts have been made to extend
the classic linear discriminant analysis for higher-order data classification
generally referred to as multilinear discriminant analysis (MDA). Most of the
existing approaches are based on the Tucker decomposition and $\textit{n}$-mode
tensor-matrix products. The current paper presents a new approach to
tensor-based multilinear discriminant analysis referred to as High-Order
Multilinear Discriminant Analysis (HOMLDA). This approach is based upon the
tensor decomposition where an order-$\textit{n}$ tensor can be written as a
product of order-$\textit{n}$ tensors and has a natural extension to
traditional linear discriminant analysis (LDA). However, the resulting
framework, HOMLDA, may produce a within-class scatter tensor that is close to
singular, in which case its inverse cannot be computed accurately and the
discriminant analysis is distorted. To address this problem, an improved method
referred to as Robust High-Order Multilinear Discriminant Analysis (RHOMLDA) is
introduced.
Experimental results on multiple data sets illustrate that our proposed
approach provides improved classification performance with respect to the
current Tucker decomposition-based supervised learning methods.
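The "product of order-$\textit{n}$ tensors" mentioned in the abstract can be made concrete for order 3. The sketch below is illustrative and not necessarily the paper's exact construction: it implements the tensor-tensor t-product (Kilmer, Braman, and co-authors), computed as facewise matrix products in the Fourier domain along the third mode.

```python
import numpy as np

def t_product(A, B):
    """t-product of order-3 tensors A (n1 x n2 x n3) and B (n2 x n4 x n3).

    Circular convolution of the tubes along mode 3, computed as
    facewise matrix products after a DFT along that mode.
    """
    Ah = np.fft.fft(A, axis=2)
    Bh = np.fft.fft(B, axis=2)
    Ch = np.einsum('ijk,jlk->ilk', Ah, Bh)  # one matrix product per frontal face
    return np.fft.ifft(Ch, axis=2).real

# With a single frontal slice (n3 = 1) the t-product is the matrix product.
A = np.random.rand(4, 3, 1)
B = np.random.rand(3, 5, 1)
assert np.allclose(t_product(A, B)[:, :, 0], A[:, :, 0] @ B[:, :, 0])
```

Under this algebra there is an identity tensor (identity matrix in the first frontal slice, zeros elsewhere) and a notion of tensor eigendecomposition, which is what makes an LDA-style construction with scatter tensors and their inverses possible.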
Related papers
- High-Dimensional Tensor Discriminant Analysis with Incomplete Tensors [5.745276598549783]
We introduce a novel approach to tensor classification with incomplete data, framed within high-dimensional linear discriminant analysis.
Our method demonstrates excellent performance in simulations and real data analysis, even with significant proportions of missing data.
arXiv Detail & Related papers (2024-10-18T18:00:16Z)
- Optimal Matrix-Mimetic Tensor Algebras via Variable Projection [0.0]
Matrix mimeticity arises from interpreting tensors as operators that can be multiplied, factorized, and analyzed analogously to matrices.
We learn optimal linear mappings and corresponding tensor representations without relying on prior knowledge of the data.
We provide original theory of uniqueness of the transformation and convergence analysis of our variable-projection-based algorithm.
arXiv Detail & Related papers (2024-06-11T04:52:23Z)
- Synergistic eigenanalysis of covariance and Hessian matrices for enhanced binary classification [72.77513633290056]
We present a novel approach that combines the eigenanalysis of a covariance matrix evaluated on a training set with a Hessian matrix evaluated on a deep learning model.
Our method captures intricate patterns and relationships, enhancing classification performance.
arXiv Detail & Related papers (2024-02-14T16:10:42Z)
- Large-scale gradient-based training of Mixtures of Factor Analyzers [67.21722742907981]
This article contributes both a theoretical analysis as well as a new method for efficient high-dimensional training by gradient descent.
We prove that MFA training and inference/sampling can be performed based on precision matrices, which does not require matrix inversions after training is completed.
Beyond the theoretical analysis, we apply MFA to typical image datasets such as SVHN and MNIST, and demonstrate the ability to perform sample generation and outlier detection.
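The precision-matrix claim can be illustrated outside MFA: a zero-mean Gaussian parameterized by its precision matrix $\Lambda$ can be sampled with one Cholesky factorization and a triangular solve, with no explicit inversion. This is a generic sketch of the trick, not the paper's MFA algorithm.

```python
import numpy as np

def sample_from_precision(Lambda, rng, n_samples=1):
    """Draw samples from N(0, Lambda^{-1}) without forming the inverse.

    If Lambda = L L^T (Cholesky), then z = L^{-T} eps with eps ~ N(0, I)
    has covariance L^{-T} L^{-1} = (L L^T)^{-1} = Lambda^{-1}.
    """
    L = np.linalg.cholesky(Lambda)
    eps = rng.standard_normal((Lambda.shape[0], n_samples))
    return np.linalg.solve(L.T, eps)  # triangular system, no inv(Lambda)

rng = np.random.default_rng(0)
Lambda = np.array([[4.0, 1.0], [1.0, 3.0]])  # toy precision matrix
Z = sample_from_precision(Lambda, rng, n_samples=200_000)
emp_cov = Z @ Z.T / Z.shape[1]  # empirically approximates inv(Lambda)
```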
arXiv Detail & Related papers (2023-08-26T06:12:33Z)
- Scalable tensor methods for nonuniform hypergraphs [0.18434042562191813]
A recently proposed adjacency tensor is applicable to nonuniform hypergraphs, but is prohibitively costly to form and analyze in practice.
We develop tensor times same vector (TTSV) algorithms which improve complexity from $O(n^r)$ to a low-degree polynomial in $r$.
We demonstrate the flexibility and utility of our approach in practice by developing tensor-based hypergraph centrality and clustering algorithms.
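The cost being improved on is concrete: for a rank-$r$ hypergraph (here $r = 3$) the dense adjacency tensor has $n^r$ entries, and the naive tensor-times-same-vector operation touches all of them. A baseline sketch, assuming the symmetrized convention where every permutation of a hyperedge gets a nonzero entry:

```python
import itertools
import numpy as np

def ttsv_dense(A, x):
    # b_i = sum_{j,k} A[i, j, k] * x[j] * x[k]: O(n^3) time and memory.
    return np.einsum('ijk,j,k->i', A, x, x)

# One hyperedge {0, 1, 2} on n = 3 vertices, symmetrized over permutations.
n = 3
A = np.zeros((n, n, n))
for p in itertools.permutations((0, 1, 2)):
    A[p] = 1.0
b = ttsv_dense(A, np.ones(n))
```

The paper's contribution is avoiding this dense form entirely; the baseline above only shows the operation being accelerated.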
arXiv Detail & Related papers (2023-06-30T17:41:58Z)
- Learning Graphical Factor Models with Riemannian Optimization [70.13748170371889]
This paper proposes a flexible algorithmic framework for graph learning under low-rank structural constraints.
The problem is expressed as penalized maximum likelihood estimation of an elliptical distribution.
We leverage geometries of positive definite matrices and positive semi-definite matrices of fixed rank that are well suited to elliptical models.
arXiv Detail & Related papers (2022-10-21T13:19:45Z)
- HyperNTF: A Hypergraph Regularized Nonnegative Tensor Factorization for Dimensionality Reduction [2.1485350418225244]
We propose a novel method, called Hypergraph Regularized Nonnegative Tensor Factorization (HyperNTF).
HyperNTF can preserve nonnegativity in tensor factorization, and uncover the higher-order relationship among the nearest neighborhoods.
Experiments show that HyperNTF robustly outperforms state-of-the-art algorithms in clustering analysis.
arXiv Detail & Related papers (2021-01-18T01:38:47Z)
- Information-Theoretic Limits for the Matrix Tensor Product [8.206394018475708]
This paper studies a high-dimensional inference problem involving the matrix tensor product of random matrices.
On the technical side, this paper introduces some new techniques for the analysis of high-dimensional matrix-preserving signals.
arXiv Detail & Related papers (2020-05-22T17:03:48Z)
- Spectral Learning on Matrices and Tensors [74.88243719463053]
We show that tensor decomposition can pick up latent effects that are missed by matrix methods.
We also outline computational techniques to design efficient tensor decomposition methods.
arXiv Detail & Related papers (2020-04-16T22:53:00Z)
- Saliency-based Weighted Multi-label Linear Discriminant Analysis [101.12909759844946]
We propose a new variant of Linear Discriminant Analysis (LDA) to solve multi-label classification tasks.
The proposed method is based on a probabilistic model for defining the weights of individual samples.
The Saliency-based weighted Multi-label LDA approach is shown to lead to performance improvements in various multi-label classification problems.
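The sample-weighting idea can be sketched for the single-label case: weights $w_i$ scale each sample's contribution to the class means and scatter matrices, and the projection comes from the generalized eigenproblem $S_b v = \lambda S_w v$. The probabilistic weight model is the paper's contribution and is not reproduced here; this hypothetical version simply takes weights as given.

```python
import numpy as np

def weighted_lda(X, y, w, n_components=1, reg=1e-6):
    """LDA with per-sample weights w scaling each sample's contribution."""
    classes = np.unique(y)
    mu = np.average(X, axis=0, weights=w)  # weighted global mean
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in classes:
        m = y == c
        wc = w[m]
        mu_c = np.average(X[m], axis=0, weights=wc)
        Xc = X[m] - mu_c
        Sw += (wc[:, None] * Xc).T @ Xc          # weighted within-class scatter
        diff = (mu_c - mu)[:, None]
        Sb += wc.sum() * diff @ diff.T           # weighted between-class scatter
    # Small ridge keeps Sw invertible (cf. the near-singular scatter issue
    # in the main abstract above).
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + reg * np.eye(d), Sb))
    order = np.argsort(-evals.real)
    return evecs[:, order[:n_components]].real
```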
arXiv Detail & Related papers (2020-04-08T19:40:53Z)
- Improved guarantees and a multiple-descent curve for Column Subset Selection and the Nystr\"om method [76.73096213472897]
We develop techniques which exploit spectral properties of the data matrix to obtain improved approximation guarantees.
Our approach leads to significantly better bounds for datasets with known rates of singular value decay.
We show that both our improved bounds and the multiple-descent curve can be observed on real datasets simply by varying the RBF parameter.
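The Nystr\"om method approximates an $n \times n$ kernel matrix from $s$ sampled columns, $K \approx C W^{+} C^{\top}$ with $C = K[:, S]$ and $W = K[S, S]$. A minimal sketch with an RBF kernel and a fixed landmark set (the paper's improved column-selection guarantees are not implemented):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # k(x, y) = exp(-gamma * ||x - y||^2), computed pairwise.
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq)

def nystrom(X, landmarks, gamma=1.0):
    """Rank-s Nystrom approximation K ~= C @ pinv(W) @ C.T."""
    C = rbf_kernel(X, X[landmarks], gamma)  # n x s column block
    W = C[landmarks]                        # s x s block K[S, S]
    return C @ np.linalg.pinv(W) @ C.T

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
K = rbf_kernel(X, X)
K_hat = nystrom(X, np.arange(20))
```

Varying `gamma` changes the kernel's spectral decay, which is exactly the knob the summary mentions for exposing the multiple-descent behavior on real data.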
arXiv Detail & Related papers (2020-02-21T00:43:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.