Robust Geometric Metric Learning
- URL: http://arxiv.org/abs/2202.11550v1
- Date: Wed, 23 Feb 2022 14:55:08 GMT
- Title: Robust Geometric Metric Learning
- Authors: Antoine Collas, Arnaud Breloy, Guillaume Ginolhac, Chengfang Ren,
Jean-Philippe Ovarlez
- Abstract summary: This paper proposes new algorithms for the metric learning problem.
A general approach, called Robust Geometric Metric Learning (RGML), is then studied.
The performance of RGML is assessed on real datasets.
- Score: 17.855338784378
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper proposes new algorithms for the metric learning problem. We start
by noticing that several classical metric learning formulations from the
literature can be viewed as modified covariance matrix estimation problems.
Leveraging this point of view, a general approach, called Robust Geometric
Metric Learning (RGML), is then studied. This method aims at simultaneously
estimating the covariance matrix of each class while shrinking them towards
their (unknown) barycenter. We focus on two specific cost functions: one
associated with the Gaussian likelihood (RGML Gaussian) and one with Tyler's
M-estimator (RGML Tyler). In both, the barycenter is defined with the Riemannian
distance, which enjoys nice properties of geodesic convexity and affine
invariance. The optimization is performed using the Riemannian geometry of
symmetric positive definite matrices and its submanifold of unit determinant.
Finally, the performance of RGML is assessed on real datasets. The method
exhibits strong performance while remaining robust to mislabeled data.
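As a rough illustration of the geometry the abstract invokes, the sketch below computes the affine-invariant Riemannian distance between symmetric positive definite (SPD) matrices and a Karcher-mean barycenter by the standard fixed-point iteration. This is a minimal numpy sketch of the underlying geometry, not the authors' RGML implementation; all function names are illustrative.

```python
import numpy as np

def _spd_apply(A, fun):
    # Apply a scalar function to a symmetric matrix via its eigendecomposition.
    w, V = np.linalg.eigh((A + A.T) / 2)
    return (V * fun(w)) @ V.T

def riemannian_distance(A, B):
    # Affine-invariant distance: d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F.
    A_isqrt = _spd_apply(A, lambda w: 1.0 / np.sqrt(w))
    return np.linalg.norm(_spd_apply(A_isqrt @ B @ A_isqrt, np.log), "fro")

def karcher_mean(mats, iters=50):
    # Riemannian barycenter by fixed-point iteration:
    # G <- G^{1/2} exp( mean_i log(G^{-1/2} S_i G^{-1/2}) ) G^{1/2}.
    G = np.mean(mats, axis=0)
    for _ in range(iters):
        G_sqrt = _spd_apply(G, np.sqrt)
        G_isqrt = _spd_apply(G, lambda w: 1.0 / np.sqrt(w))
        T = np.mean([_spd_apply(G_isqrt @ S @ G_isqrt, np.log) for S in mats],
                    axis=0)
        G = G_sqrt @ _spd_apply(T, np.exp) @ G_sqrt
    return G
```

The affine invariance mentioned in the abstract means d(CACᵀ, CBCᵀ) = d(A, B) for any invertible C, and the barycenter of two matrices is the geodesic midpoint.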
Related papers
- Understanding Matrix Function Normalizations in Covariance Pooling through the Lens of Riemannian Geometry [63.694184882697435]
Global Covariance Pooling (GCP) has been demonstrated to improve the performance of Deep Neural Networks (DNNs) by exploiting second-order statistics of high-level representations.
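For context, matrix function normalization in GCP usually means applying a spectral function, such as a matrix power, to the pooled covariance of the features. A minimal sketch, assuming the common matrix-square-root normalization (alpha = 0.5); the function name and regularizer are illustrative, not the paper's code:

```python
import numpy as np

def matrix_power_normalize(features, alpha=0.5, eps=1e-5):
    # Pool (n, d) features into a d x d covariance matrix, then apply
    # Sigma^alpha through its eigendecomposition. alpha = 0.5 gives the
    # widely used matrix-square-root normalization; eps keeps Sigma SPD.
    n, d = features.shape
    centered = features - features.mean(axis=0)
    sigma = centered.T @ centered / n + eps * np.eye(d)
    w, V = np.linalg.eigh(sigma)
    return (V * w**alpha) @ V.T
```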
arXiv Detail & Related papers (2024-07-15T07:11:44Z)
- Intrinsic Bayesian Cramér-Rao Bound with an Application to Covariance Matrix Estimation [49.67011673289242]
This paper presents a new performance bound for estimation problems where the parameter to estimate lies in a smooth manifold.
It induces a geometry for the parameter manifold, as well as an intrinsic notion of the estimation error measure.
arXiv Detail & Related papers (2023-11-08T15:17:13Z)
- Accelerated Discovery of Machine-Learned Symmetries: Deriving the Exceptional Lie Groups G2, F4 and E6 [55.41644538483948]
This letter introduces two improved algorithms that significantly speed up the discovery of symmetry transformations.
Given the significant complexity of the exceptional Lie groups, our results demonstrate that this machine-learning method for discovering symmetries is completely general and can be applied to a wide variety of labeled datasets.
arXiv Detail & Related papers (2023-07-10T20:25:44Z)
- Sliced-Wasserstein on Symmetric Positive Definite Matrices for M/EEG Signals [24.798859309715667]
We propose a new method to deal with distributions of covariance matrices.
We show that it is an efficient surrogate to the Wasserstein distance in domain adaptation for Brain-Computer Interface applications.
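The paper's exact construction on SPD matrices is more involved; the sketch below is a simplified surrogate that embeds SPD matrices with the matrix logarithm (a log-Euclidean embedding) and then computes a standard sliced Wasserstein distance: project onto random directions and average 1D Wasserstein distances, which for equal-size samples reduce to sorted differences. All names and the embedding choice are illustrative assumptions.

```python
import numpy as np

def _log_vec(S):
    # Log-Euclidean embedding: matrix log of an SPD matrix, flattened.
    w, V = np.linalg.eigh((S + S.T) / 2)
    return ((V * np.log(w)) @ V.T).ravel()

def sliced_wasserstein(set_a, set_b, n_proj=100, seed=0):
    # Sliced 2-Wasserstein between two equally sized sets of SPD matrices
    # after the log-Euclidean embedding.
    A = np.stack([_log_vec(S) for S in set_a])
    B = np.stack([_log_vec(S) for S in set_b])
    rng = np.random.default_rng(seed)
    dirs = rng.standard_normal((n_proj, A.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    pa = np.sort(A @ dirs.T, axis=0)  # sorted 1D projections, (n, n_proj)
    pb = np.sort(B @ dirs.T, axis=0)
    return np.sqrt(np.mean((pa - pb) ** 2))
```

Each 1D projection costs only a sort, which is what makes sliced distances an attractive surrogate to the full Wasserstein distance.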
arXiv Detail & Related papers (2023-03-10T09:08:46Z)
- Deep Learning Symmetries and Their Lie Groups, Algebras, and Subalgebras from First Principles [55.41644538483948]
We design a deep-learning algorithm for the discovery and identification of the continuous group of symmetries present in a labeled dataset.
We use fully connected neural networks to model the symmetry transformations and the corresponding generators.
Our study also opens the door for using a machine learning approach in the mathematical study of Lie groups and their properties.
arXiv Detail & Related papers (2023-01-13T16:25:25Z)
- Learning Graphical Factor Models with Riemannian Optimization [70.13748170371889]
This paper proposes a flexible algorithmic framework for graph learning under low-rank structural constraints.
The problem is expressed as penalized maximum likelihood estimation of an elliptical distribution.
We leverage geometries of positive definite matrices and positive semi-definite matrices of fixed rank that are well suited to elliptical models.
arXiv Detail & Related papers (2022-10-21T13:19:45Z)
- Lagrangian Manifold Monte Carlo on Monge Patches [5.586191108738564]
We show how Lagrangian Monte Carlo in this metric efficiently explores the target distributions.
Our metric only requires first-order information and has fast inverse and determinants.
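The "fast inverse and determinants" claim reflects the rank-one structure of a Monge-patch metric, which commonly takes the form G = I + ggᵀ with g the gradient of the embedding function. Assuming that form (a sketch, not the paper's exact metric, which may carry additional scaling), the Sherman-Morrison formula and the matrix determinant lemma give closed forms with no O(d³) factorization:

```python
import numpy as np

def monge_metric(grad):
    # Metric induced by a Monge patch (x, f(x)): G = I + g g^T,
    # built from first-order information only.
    g = np.asarray(grad, dtype=float)
    return np.eye(g.size) + np.outer(g, g)

def monge_metric_inv(grad):
    # Sherman-Morrison: (I + g g^T)^{-1} = I - g g^T / (1 + g^T g).
    g = np.asarray(grad, dtype=float)
    return np.eye(g.size) - np.outer(g, g) / (1.0 + g @ g)

def monge_metric_logdet(grad):
    # Matrix determinant lemma: det(I + g g^T) = 1 + g^T g.
    g = np.asarray(grad, dtype=float)
    return np.log1p(g @ g)
```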
arXiv Detail & Related papers (2022-02-01T21:01:22Z)
- Test Set Sizing Via Random Matrix Theory [91.3755431537592]
This paper uses techniques from Random Matrix Theory to find the ideal training-testing data split for a simple linear regression.
It defines "ideal" as satisfying the integrity metric, i.e., the empirical model error equals the actual measurement noise.
This paper is the first to solve for the training and test size for any model in a way that is truly optimal.
arXiv Detail & Related papers (2021-12-11T13:18:33Z)
- Probabilistic Learning Vector Quantization on Manifold of Symmetric Positive Definite Matrices [3.727361969017079]
We develop a new classification method for manifold-valued data in the framework of probabilistic learning vector quantization.
In this paper, we generalize the probabilistic learning vector quantization algorithm for data points living on the manifold of symmetric positive definite matrices.
Empirical investigations on synthetic data, image data, and motor imagery EEG data demonstrate the superior performance of the proposed method.
arXiv Detail & Related papers (2021-02-01T06:58:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.