$k$-means on Positive Definite Matrices, and an Application to Clustering in Radar Image Sequences
- URL: http://arxiv.org/abs/2008.03454v2
- Date: Wed, 26 Aug 2020 03:11:48 GMT
- Title: $k$-means on Positive Definite Matrices, and an Application to Clustering in Radar Image Sequences
- Authors: Daniel Fryer, Hien Nguyen, Pascal Castellazzi
- Abstract summary: We state theoretical properties of $k$-means clustering of Symmetric Positive Definite (SPD) matrices in a non-Euclidean space.
We then present a novel application of this method to time-series clustering of pixels in a sequence of Synthetic Aperture Radar images.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We state theoretical properties of $k$-means clustering of Symmetric
Positive Definite (SPD) matrices in a non-Euclidean space that provides a
natural and favourable representation of these data. We then present a novel
application of this method to time-series clustering of pixels in a sequence
of Synthetic Aperture Radar images, via their finite-lag autocovariance
matrices.
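As a concrete illustration of this pipeline, here is a minimal sketch assuming a log-Euclidean treatment of the SPD autocovariance matrices; the paper's specific non-Euclidean geometry, the lag choice, and the helper names below are our assumptions, not taken from the paper.

```python
# Minimal sketch of the pipeline the abstract describes. Assumptions (ours,
# not the paper's): a log-Euclidean treatment of SPD matrices and
# scikit-learn's k-means.
import numpy as np
from scipy.linalg import logm
from sklearn.cluster import KMeans

def autocovariance_matrix(x, max_lag):
    """(max_lag+1) x (max_lag+1) autocovariance matrix of a 1-D series."""
    x = x - x.mean()
    n = len(x)
    # Biased (divide-by-n) estimates keep the Toeplitz matrix positive
    # semi-definite; entry (i, j) is the autocovariance at lag |i - j|.
    acov = np.array([np.dot(x[:n - h], x[h:]) / n for h in range(max_lag + 1)])
    return acov[np.abs(np.subtract.outer(np.arange(max_lag + 1),
                                         np.arange(max_lag + 1)))]

def cluster_pixels(series, k, max_lag=2, jitter=1e-8):
    """series: (n_pixels, n_times) array. Returns a cluster label per pixel."""
    feats = []
    for x in series:
        C = autocovariance_matrix(x, max_lag)
        C += jitter * np.eye(C.shape[0])   # nudge C to strict positive definiteness
        L = logm(C).real                   # tangent-space (log-Euclidean) map
        # Vectorise the upper triangle; weighting off-diagonals by sqrt(2)
        # would match the log-Euclidean metric exactly (omitted for brevity).
        feats.append(L[np.triu_indices_from(L)])
    return KMeans(n_clusters=k, n_init=10).fit_predict(np.array(feats))
```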
Related papers
- Structure-Preserving Transformers for Sequences of SPD Matrices [6.404789669795639]
Transformer-based self-attention mechanisms have been successfully applied to the analysis of a variety of context-reliant data types.
In this paper, we present such a mechanism, designed to classify sequences of Symmetric Positive Definite matrices.
We apply our method to automatic sleep staging on time series of EEG-derived covariance matrices from a standard dataset, obtaining high levels of stage-wise performance.
arXiv Detail & Related papers (2023-09-14T10:23:43Z)
- Sliced-Wasserstein on Symmetric Positive Definite Matrices for M/EEG Signals [24.798859309715667]
We propose a new method to deal with distributions of covariance matrices.
We show that it is an efficient surrogate for the Wasserstein distance in domain adaptation for Brain-Computer Interface applications.
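As a rough illustration only: a generic Monte Carlo sliced-Wasserstein estimate between two equal-size samples of SPD matrices after a log-Euclidean vectorisation. The paper's construction slices on the SPD matrices themselves, so this simplification is ours.

```python
# Rough illustration only: generic sliced-Wasserstein between two samples of
# SPD matrices after log-Euclidean vectorisation (our simplification).
import numpy as np
from scipy.linalg import logm

def vectorise_spd(mats):
    """Map each SPD matrix to the upper triangle of its matrix logarithm."""
    iu = np.triu_indices(mats[0].shape[0])
    return np.array([logm(M).real[iu] for M in mats])

def sliced_wasserstein(X, Y, n_proj=100, p=2, seed=None):
    """Monte Carlo sliced p-Wasserstein between equal-size samples X, Y."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_proj):
        theta = rng.normal(size=X.shape[1])
        theta /= np.linalg.norm(theta)
        # 1-D Wasserstein-p has a closed form: match sorted projections.
        xs, ys = np.sort(X @ theta), np.sort(Y @ theta)
        total += np.mean(np.abs(xs - ys) ** p)
    return (total / n_proj) ** (1 / p)
```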
arXiv Detail & Related papers (2023-03-10T09:08:46Z)
- Semi-Supervised Subspace Clustering via Tensor Low-Rank Representation [64.49871502193477]
We propose a novel semi-supervised subspace clustering method, which is able to simultaneously augment the initial supervisory information and construct a discriminative affinity matrix.
Comprehensive experimental results on six commonly-used benchmark datasets demonstrate the superiority of our method over state-of-the-art methods.
arXiv Detail & Related papers (2022-05-21T01:47:17Z)
- An Equivalence Principle for the Spectrum of Random Inner-Product Kernel Matrices with Polynomial Scalings [21.727073594338297]
This study is motivated by applications in machine learning and statistics.
We establish the weak limit of the empirical spectral distribution of these random matrices in a polynomial scaling regime.
The limiting distribution can be characterized as the free additive convolution of a Marchenko-Pastur law and a semicircle law.
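For reference, the two component laws are standard; a brief reminder of their densities under common normalisations (the paper's scaling constants may differ):

```latex
% Limit law: the free additive convolution \mu_{\mathrm{MP}} \boxplus \mu_{\mathrm{sc}}.
% Standard normalisations of the two components:
\rho_{\mathrm{sc}}(x) = \frac{1}{2\pi}\sqrt{4 - x^2}\,, \quad x \in [-2, 2];
\qquad
\rho_{\mathrm{MP}}(x) = \frac{\sqrt{(\lambda_+ - x)(x - \lambda_-)}}{2\pi \lambda x}\,,
\quad \lambda_\pm = \bigl(1 \pm \sqrt{\lambda}\bigr)^2,\ \lambda \in (0, 1].
```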
arXiv Detail & Related papers (2022-05-12T18:50:21Z)
- Sublinear Time Approximation of Text Similarity Matrices [50.73398637380375]
We introduce a generalization of the popular Nyström method to the indefinite setting.
Our algorithm can be applied to any similarity matrix and runs in sublinear time in the size of the matrix.
We show that our method, along with a simple variant of CUR decomposition, performs very well in approximating a variety of similarity matrices.
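For context, the classical Nyström approximation that this work generalises builds a low-rank PSD approximation from a sampled column subset; a minimal sketch follows (the paper's sublinear-time and indefinite aspects are not reproduced here).

```python
# Classical (PSD) Nystrom approximation for context; the paper generalises
# this idea to indefinite similarity matrices, which this sketch does not do.
import numpy as np

def nystrom(K, landmarks, rcond=1e-10):
    """Low-rank approximation K ~= C W^+ C.T from sampled columns.

    K: (n, n) PSD similarity matrix (formed here only for clarity; the
    sublinear method avoids materialising it). landmarks: column indices.
    """
    C = K[:, landmarks]                      # n x m sampled columns
    W = K[np.ix_(landmarks, landmarks)]      # m x m intersection block
    W_pinv = np.linalg.pinv(W, rcond=rcond)  # pseudo-inverse for stability
    return C @ W_pinv @ C.T
```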
arXiv Detail & Related papers (2021-12-17T17:04:34Z)
- Non-PSD Matrix Sketching with Applications to Regression and Optimization [56.730993511802865]
We present dimensionality reduction methods for non-PSD and "square-root" matrices.
We show how these techniques can be used for multiple downstream tasks.
arXiv Detail & Related papers (2021-06-16T04:07:48Z)
- Learning Log-Determinant Divergences for Positive Definite Matrices [47.61701711840848]
In this paper, we propose to learn similarity measures in a data-driven manner.
We capitalize on the $\alpha\beta$-log-det divergence, which is a meta-divergence parametrized by scalars $\alpha$ and $\beta$.
Our key idea is to cast these parameters in a continuum and learn them from data.
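For reference, one widely used parametrisation of this divergence (following Cichocki, Cruces and Amari; this is our recollection of the exact form, so check the source paper for the variant it learns) can be evaluated from generalised eigenvalues:

```python
# One common parametrisation of the alpha-beta log-det divergence; our
# recollection of the exact form, valid for alpha, beta != 0 with
# alpha + beta != 0. Check the paper for the variant it actually learns.
import numpy as np
from scipy.linalg import eigh

def ab_logdet_div(P, Q, alpha, beta):
    """Divergence between SPD matrices P and Q with scalar parameters."""
    lam = eigh(P, Q, eigvals_only=True)  # generalised eigenvalues of (P, Q)
    vals = (alpha * lam**beta + beta * lam**(-alpha)) / (alpha + beta)
    return np.sum(np.log(vals)) / (alpha * beta)
```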
arXiv Detail & Related papers (2021-04-13T19:09:43Z)
- Adversarially-Trained Nonnegative Matrix Factorization [77.34726150561087]
We consider an adversarially-trained version of the nonnegative matrix factorization.
In our formulation, an attacker adds an arbitrary matrix of bounded norm to the given data matrix.
We design efficient algorithms inspired by adversarial training to optimize for dictionary and coefficient matrices.
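As a toy illustration only, not the paper's algorithm: for a Frobenius-norm-bounded attacker, the inner maximisation has a closed form, R = eps * E / ||E||_F with E = X - WH, which suggests the following projected-gradient training loop. All names, step sizes, and the attacker model are our assumptions.

```python
# Toy adversarial-training loop for NMF (our illustration, not the paper's
# algorithm). The attacker's Frobenius-bounded worst case is R = eps*E/||E||_F
# with E = X - W @ H, since ||E + R|| is maximised by aligning R with E.
import numpy as np

def adv_nmf(X, rank, eps=0.1, lr=1e-3, iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.uniform(size=(n, rank))
    H = rng.uniform(size=(rank, m))
    for _ in range(iters):
        E = X - W @ H
        X_adv = X + eps * E / (np.linalg.norm(E) + 1e-12)  # attacker's best response
        G = X_adv - W @ H                                  # residual on perturbed data
        W_new = np.maximum(W + lr * (G @ H.T), 0.0)  # projected gradient step on W
        H_new = np.maximum(H + lr * (W.T @ G), 0.0)  # projected gradient step on H
        W, H = W_new, H_new
    return W, H
```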
arXiv Detail & Related papers (2021-04-10T13:13:17Z)
- A simpler spectral approach for clustering in directed networks [1.52292571922932]
We show that using the eigenvalue/eigenvector decomposition of the adjacency matrix is simpler than the methods in common use.
We provide numerical evidence for the superiority of Gaussian mixture clustering over the widely used k-means algorithm in this setting.
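A hedged sketch of this recipe; the number of eigenvectors kept and the real embedding of complex eigenvectors are our choices, not details from the paper.

```python
# Hedged sketch: eigendecompose the (non-symmetric) adjacency matrix, embed
# nodes via the leading eigenvectors' real and imaginary parts, and fit a
# Gaussian mixture instead of k-means. Details are our assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

def cluster_directed(A, k, n_vecs=None):
    """A: (n, n) adjacency matrix of a directed graph. Returns node labels."""
    n_vecs = n_vecs or k
    eigvals, eigvecs = np.linalg.eig(A)       # complex in general
    order = np.argsort(-np.abs(eigvals))      # leading eigenvalues by modulus
    V = eigvecs[:, order[:n_vecs]]
    X = np.hstack([V.real, V.imag])           # real-valued embedding per node
    return GaussianMixture(n_components=k).fit_predict(X)
```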
arXiv Detail & Related papers (2021-02-05T14:16:45Z)
- Rank-One Measurements of Low-Rank PSD Matrices Have Small Feasible Sets [26.42912954945887]
We study the role of the constraint set in determining the solution to low-rank, positive semidefinite (PSD) matrix sensing problems.
We demonstrate practical implications by applying conic projection methods for PSD matrix recovery without incorporating low-rank regularization.
arXiv Detail & Related papers (2020-12-17T17:23:27Z)
- Kernel learning approaches for summarising and combining posterior similarity matrices [68.8204255655161]
We build upon the notion of the posterior similarity matrix (PSM) in order to suggest new approaches for summarising the output of MCMC algorithms for Bayesian clustering models.
A key contribution of our work is the observation that PSMs are positive semi-definite, and hence can be used to define probabilistically-motivated kernel matrices.
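For context, a minimal sketch of how a posterior similarity matrix is typically assembled from MCMC cluster-label draws (the function name and array layout are ours):

```python
# Minimal sketch: a posterior similarity matrix (PSM) from MCMC draws.
# Entry (i, j) estimates the posterior probability that items i and j share
# a cluster. Each co-clustering indicator matrix is PSD, so their average
# (the PSM) is PSD and can serve as a kernel matrix, as the summary notes.
import numpy as np

def posterior_similarity_matrix(draws):
    """draws: (n_samples, n_items) array of cluster labels per MCMC draw."""
    draws = np.asarray(draws)
    n_samples, n_items = draws.shape
    psm = np.zeros((n_items, n_items))
    for z in draws:                        # one clustering per MCMC sample
        psm += (z[:, None] == z[None, :])  # co-clustering indicator matrix
    return psm / n_samples
```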
arXiv Detail & Related papers (2020-09-27T14:16:14Z)