Inductive Geometric Matrix Midranges
- URL: http://arxiv.org/abs/2006.01508v3
- Date: Wed, 21 Jul 2021 01:31:31 GMT
- Title: Inductive Geometric Matrix Midranges
- Authors: Graham W. Van Goffrier, Cyrus Mostajeran, Rodolphe Sepulchre
- Abstract summary: We propose a geometric method for unsupervised clustering of SPD data based on the Thompson metric.
We demonstrate the incorporation of the Thompson metric and inductive midrange into X-means and K-means++ clustering algorithms.
- Score: 1.2891210250935146
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Covariance data as represented by symmetric positive definite (SPD) matrices
are ubiquitous throughout technical study as efficient descriptors of
interdependent systems. Euclidean analysis of SPD matrices, while
computationally fast, can lead to skewed and even unphysical interpretations of
data. Riemannian methods preserve the geometric structure of SPD data at the
cost of expensive eigenvalue computations. In this paper, we propose a
geometric method for unsupervised clustering of SPD data based on the Thompson
metric. This technique relies upon a novel "inductive midrange" centroid
computation for SPD data, whose properties are examined and numerically
confirmed. We demonstrate the incorporation of the Thompson metric and
inductive midrange into X-means and K-means++ clustering algorithms.
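As a rough illustration of the ingredients described above (a minimal sketch, not the authors' implementation), the Thompson metric between two SPD matrices can be computed from the extreme generalized eigenvalues of the pair, and K-means++ seeding only needs that distance as a plug-in. The inductive midrange centroid itself is defined in the paper and is not reproduced here; `thompson_distance` and `kmeanspp_seeds` are hypothetical names.
```python
# Minimal sketch, assuming NumPy/SciPy. The Thompson metric between SPD
# matrices A, B is
#   d_T(A, B) = max( log lmax(B^{-1}A), log lmax(A^{-1}B) ),
# obtainable from the extreme generalized eigenvalues of the pencil (A, B).
import numpy as np
from scipy.linalg import eigh

def thompson_distance(A: np.ndarray, B: np.ndarray) -> float:
    """Thompson metric between SPD matrices A and B."""
    w = eigh(A, B, eigvals_only=True)  # eigenvalues of B^{-1}A, ascending
    # lmax(B^{-1}A) = w[-1]; lmax(A^{-1}B) = 1 / w[0]
    return max(np.log(w[-1]), -np.log(w[0]))

def kmeanspp_seeds(X, k, dist=thompson_distance, seed=None):
    """Standard K-means++ seeding with a pluggable distance. The paper's
    inductive-midrange centroid update is NOT reproduced here."""
    rng = np.random.default_rng(seed)
    seeds = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        # Sample the next seed proportionally to the squared distance to
        # the nearest seed chosen so far (the usual D^2 weighting).
        d2 = np.array([min(dist(x, s) for s in seeds) ** 2 for x in X])
        seeds.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return seeds
```
Swapping the distance this way leaves the D^2-sampling logic of K-means++ untouched, which is presumably why the abstract describes the Thompson metric and inductive midrange as drop-in components of X-means and K-means++.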
Related papers
- Understanding Matrix Function Normalizations in Covariance Pooling through the Lens of Riemannian Geometry [63.694184882697435]
Global Covariance Pooling (GCP) has been demonstrated to improve the performance of Deep Neural Networks (DNNs) by exploiting second-order statistics of high-level representations.
arXiv Detail & Related papers (2024-07-15T07:11:44Z)
- Geometric statistics with subspace structure preservation for SPD matrices [1.749935196721634]
We present a framework for the processing of SPD-valued data that preserves subspace structures.
This is achieved through the use of the Thompson geometry of the semidefinite cone.
arXiv Detail & Related papers (2024-07-02T22:22:36Z)
- The Fisher-Rao geometry of CES distributions [50.50897590847961]
The Fisher-Rao information geometry allows for leveraging tools from differential geometry.
We will present some practical uses of these geometric tools in the framework of elliptical distributions.
arXiv Detail & Related papers (2023-10-02T09:23:32Z)
- Structure-Preserving Transformers for Sequences of SPD Matrices [6.404789669795639]
Transformer-based self-attention mechanisms have been successfully applied to the analysis of a variety of context-reliant data types.
In this paper, we present such a mechanism, designed to classify sequences of Symmetric Positive Definite matrices.
We apply our method to automatic sleep staging on time series of EEG-derived covariance matrices from a standard dataset, obtaining high levels of stage-wise performance.
arXiv Detail & Related papers (2023-09-14T10:23:43Z)
- Differential geometry with extreme eigenvalues in the positive semidefinite cone [1.9116784879310025]
We present a route to a scalable geometric framework for the analysis and processing of SPD-valued data, based on the efficient computation of extreme generalized eigenvalues.
We define a novel iterative mean of SPD matrices based on this geometry and prove its existence and uniqueness for a given finite collection of points.
arXiv Detail & Related papers (2023-04-14T18:37:49Z)
- Adaptive Log-Euclidean Metrics for SPD Matrix Learning [73.12655932115881]
We propose Adaptive Log-Euclidean Metrics (ALEMs), which extend the widely used Log-Euclidean Metric (LEM); a minimal LEM sketch follows after this list.
The experimental and theoretical results demonstrate the merit of the proposed metrics in improving the performance of SPD neural networks.
arXiv Detail & Related papers (2023-03-26T18:31:52Z)
- Sliced-Wasserstein on Symmetric Positive Definite Matrices for M/EEG Signals [24.798859309715667]
We propose a new method to deal with distributions of covariance matrices.
We show that it is an efficient surrogate to the Wasserstein distance in domain adaptation for Brain Computer Interface applications.
arXiv Detail & Related papers (2023-03-10T09:08:46Z)
- Quantum Algorithms for Data Representation and Analysis [68.754953879193]
We provide quantum procedures that speed-up the solution of eigenproblems for data representation in machine learning.
The power and practical use of these subroutines are shown through new quantum algorithms, sublinear in the input matrix's size, for principal component analysis, correspondence analysis, and latent semantic analysis.
Results show that the run-time parameters that do not depend on the input's size are reasonable and that the error of the computed model is small, allowing for competitive classification performance.
arXiv Detail & Related papers (2021-04-19T00:41:43Z)
- Learning Log-Determinant Divergences for Positive Definite Matrices [47.61701711840848]
In this paper, we propose to learn similarity measures in a data-driven manner.
We capitalize on the alpha-beta log-det divergence, which is a meta-divergence parametrized by scalars alpha and beta.
Our key idea is to cast these parameters in a continuum and learn them from data.
arXiv Detail & Related papers (2021-04-13T19:09:43Z)
- Asymptotic Analysis of an Ensemble of Randomly Projected Linear Discriminants [94.46276668068327]
In [1], an ensemble of randomly projected linear discriminants is used to classify datasets.
We develop a consistent estimator of the misclassification probability as an alternative to the computationally-costly cross-validation estimator.
We also demonstrate the use of our estimator for tuning the projection dimension on both real and synthetic data.
arXiv Detail & Related papers (2020-04-17T12:47:04Z)
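For reference on the ALEM entry above, this is a minimal sketch of the Log-Euclidean Metric (LEM) that ALEMs extend, d_LE(A, B) = ||log A - log B||_F, assuming NumPy/SciPy; `spd_log` and `log_euclidean_distance` are hypothetical helper names, not code from that paper.
```python
# Minimal sketch of the Log-Euclidean Metric between SPD matrices.
import numpy as np
from scipy.linalg import eigh

def spd_log(A: np.ndarray) -> np.ndarray:
    """Matrix logarithm of an SPD matrix via its eigendecomposition."""
    w, V = eigh(A)
    return (V * np.log(w)) @ V.T  # V diag(log w) V^T

def log_euclidean_distance(A: np.ndarray, B: np.ndarray) -> float:
    """Log-Euclidean distance d_LE(A, B) = ||log A - log B||_F."""
    return float(np.linalg.norm(spd_log(A) - spd_log(B), "fro"))
```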