Learning Log-Determinant Divergences for Positive Definite Matrices
- URL: http://arxiv.org/abs/2104.06461v1
- Date: Tue, 13 Apr 2021 19:09:43 GMT
- Title: Learning Log-Determinant Divergences for Positive Definite Matrices
- Authors: Anoop Cherian, Panagiotis Stanitsas, Jue Wang, Mehrtash Harandi,
Vassilios Morellas, Nikolaos Papanikolopoulos
- Abstract summary: In this paper, we propose to learn similarity measures in a data-driven manner.
We capitalize on the \alpha\beta-log-det divergence, which is a meta-divergence parametrized by scalars \alpha and \beta.
Our key idea is to cast these parameters in a continuum and learn them from data.
- Score: 47.61701711840848
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Representations in the form of Symmetric Positive Definite (SPD) matrices
have been popularized in a variety of visual learning applications due to their
demonstrated ability to capture rich second-order statistics of visual data.
There exist several similarity measures for comparing SPD matrices with
documented benefits. However, selecting an appropriate measure for a given
problem remains a challenge and in most cases, is the result of a
trial-and-error process. In this paper, we propose to learn similarity measures
in a data-driven manner. To this end, we capitalize on the \alpha\beta-log-det
divergence, which is a meta-divergence parametrized by scalars \alpha and
\beta, subsuming a wide family of popular information divergences on SPD
matrices for distinct and discrete values of these parameters. Our key idea is
to cast these parameters in a continuum and learn them from data. We
systematically extend this idea to learn vector-valued parameters, thereby
increasing the expressiveness of the underlying non-linear measure. We conjoin
the divergence learning problem with several standard tasks in machine
learning, including supervised discriminative dictionary learning and
unsupervised SPD matrix clustering. We present Riemannian gradient descent
schemes for optimizing our formulations efficiently, and show the usefulness of
our method on eight standard computer vision tasks.
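As a concrete sketch of the measure the abstract builds on: for SPD matrices P and Q, the \alpha\beta-log-det divergence can be evaluated from the generalized eigenvalues \lambda_i of the pair (P, Q) as D(P||Q) = (1/(\alpha\beta)) \sum_i log((\alpha \lambda_i^{\beta} + \beta \lambda_i^{-\alpha}) / (\alpha + \beta)), defined for \alpha, \beta, \alpha + \beta nonzero. The helper below is a minimal NumPy illustration of that closed form, not the authors' implementation; the function name and the toy matrices are hypothetical.

```python
import numpy as np

def ab_logdet_div(P, Q, alpha, beta):
    """Alpha-beta log-det divergence between SPD matrices P and Q
    (illustrative sketch; requires alpha, beta, alpha + beta != 0)."""
    # Whiten Q with a Cholesky factor so that the eigenvalues of
    # Linv @ P @ Linv.T are the generalized eigenvalues of (P, Q).
    L = np.linalg.cholesky(Q)
    Linv = np.linalg.inv(L)
    lam = np.linalg.eigvalsh(Linv @ P @ Linv.T)
    terms = np.log((alpha * lam**beta + beta * lam**(-alpha)) / (alpha + beta))
    return terms.sum() / (alpha * beta)

# Toy SPD matrices (hypothetical data, not from the paper's experiments)
rng = np.random.default_rng(0)
A, B = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
P, Q = A @ A.T + 4 * np.eye(4), B @ B.T + 4 * np.eye(4)

d_self = ab_logdet_div(P, P, 0.5, 0.5)  # zero: every generalized eigenvalue is 1
d_pq = ab_logdet_div(P, Q, 0.5, 0.5)
d_qp = ab_logdet_div(Q, P, 0.5, 0.5)    # equals d_pq, since alpha == beta makes it symmetric
```

Picking discrete parameter values recovers familiar divergences from this one formula (e.g. \alpha = \beta = 1/2 yields the S-divergence up to scale, and the limit \alpha, \beta -> 0 yields the squared affine-invariant Riemannian metric up to scale); the paper's idea is to treat \alpha and \beta as continuous quantities and learn them from data instead.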
Related papers
- The Common Stability Mechanism behind most Self-Supervised Learning
Approaches [64.40701218561921]
We provide a framework to explain the stability mechanism of different self-supervised learning techniques.
We discuss the working mechanism of contrastive techniques like SimCLR, non-contrastive techniques like BYOL, SWAV, SimSiam, Barlow Twins, and DINO.
We formulate different hypotheses and test them using the Imagenet100 dataset.
arXiv Detail & Related papers (2024-02-22T20:36:24Z)
- Synergistic eigenanalysis of covariance and Hessian matrices for enhanced
binary classification [72.77513633290056]
We present a novel approach that combines the eigenanalysis of a covariance matrix evaluated on a training set with a Hessian matrix evaluated on a deep learning model.
Our method captures intricate patterns and relationships, enhancing classification performance.
arXiv Detail & Related papers (2024-02-14T16:10:42Z)
- Identifying Systems with Symmetries using Equivariant Autoregressive
Reservoir Computers [0.0]
The investigation focuses on identifying systems with symmetries using equivariant autoregressive reservoir computers.
General results in structured matrix approximation theory are presented through a two-fold approach.
arXiv Detail & Related papers (2023-11-16T02:32:26Z)
- Sliced-Wasserstein on Symmetric Positive Definite Matrices for M/EEG
Signals [24.798859309715667]
We propose a new method to deal with distributions of covariance matrices.
We show that it is an efficient surrogate to the Wasserstein distance in domain adaptation for Brain Computer Interface applications.
arXiv Detail & Related papers (2023-03-10T09:08:46Z)
- Classification of BCI-EEG based on augmented covariance matrix [0.0]
We propose a new framework based on the augmented covariance extracted from an autoregressive model to improve motor imagery classification.
We test our approach on several datasets and several subjects using the MOABB framework.
arXiv Detail & Related papers (2023-02-09T09:04:25Z)
- High-Dimensional Sparse Bayesian Learning without Covariance Matrices [66.60078365202867]
We introduce a new inference scheme that avoids explicit construction of the covariance matrix.
Our approach couples a little-known diagonal estimation result from numerical linear algebra with the conjugate gradient algorithm.
On several simulations, our method scales better than existing approaches in computation time and memory.
arXiv Detail & Related papers (2022-02-25T16:35:26Z)
- Accurate and fast matrix factorization for low-rank learning [4.435094091999926]
We tackle two important challenges related to the accurate partial singular value decomposition (SVD) and numerical rank estimation of a huge matrix.
We use the concepts of Krylov subspaces such as the Golub-Kahan bidiagonalization process as well as Ritz vectors to achieve these goals.
arXiv Detail & Related papers (2021-04-21T22:35:02Z)
- Understanding Implicit Regularization in Over-Parameterized Single Index
Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z)
- Prototypical Contrastive Learning of Unsupervised Representations [171.3046900127166]
Prototypical Contrastive Learning (PCL) is an unsupervised representation learning method.
PCL implicitly encodes semantic structures of the data into the learned embedding space.
PCL outperforms state-of-the-art instance-wise contrastive learning methods on multiple benchmarks.
arXiv Detail & Related papers (2020-05-11T09:53:36Z)
- WISDoM: characterizing neurological timeseries with the Wishart
distribution [0.0]
WISDoM is a new framework for quantifying the deviation of symmetric positive-definite matrices associated with experimental samples.
We show the application of the method in two different scenarios.
arXiv Detail & Related papers (2020-01-28T14:20:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.