Sliced-Wasserstein on Symmetric Positive Definite Matrices for M/EEG
Signals
- URL: http://arxiv.org/abs/2303.05798v2
- Date: Wed, 24 May 2023 11:48:35 GMT
- Title: Sliced-Wasserstein on Symmetric Positive Definite Matrices for M/EEG
Signals
- Authors: Clément Bonet, Benoît Malézieux, Alain Rakotomamonjy, Lucas Drumetz, Thomas Moreau, Matthieu Kowalski, Nicolas Courty
- Abstract summary: We propose a new method to deal with distributions of covariance matrices.
We show that it is an efficient surrogate to the Wasserstein distance in domain adaptation for Brain Computer Interface applications.
- Score: 24.798859309715667
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: When dealing with electro- or magnetoencephalography recordings, many supervised
prediction tasks are solved by working with covariance matrices to summarize
the signals. Learning with these matrices requires using Riemannian geometry to
account for their structure. In this paper, we propose a new method to deal
with distributions of covariance matrices and demonstrate its computational
efficiency on M/EEG multivariate time series. More specifically, we define a
Sliced-Wasserstein distance between measures of symmetric positive definite
matrices that comes with strong theoretical guarantees. Then, we take advantage
of its properties and kernel methods to apply this distance to brain-age
prediction from MEG data and compare it to state-of-the-art algorithms based on
Riemannian geometry. Finally, we show that it is an efficient surrogate to the
Wasserstein distance in domain adaptation for Brain Computer Interface
applications.
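The abstract does not spell out the construction, but a natural reading consistent with its log-Euclidean setting is: map each SPD matrix to the tangent space with the matrix logarithm, project onto random symmetric directions of unit Frobenius norm, and average closed-form one-dimensional Wasserstein distances over directions. The sketch below illustrates that recipe; all names are hypothetical and the paper's exact projection and sampling scheme may differ.

```python
import numpy as np

def logm_spd(M):
    """Matrix logarithm of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.log(w)) @ V.T

def spd_sliced_wasserstein(X, Y, n_proj=200, p=2, seed=0):
    """Monte-Carlo estimate of a sliced-Wasserstein-p distance between two
    equal-size samples of SPD matrices, each of shape (n, d, d)."""
    assert X.shape == Y.shape, "this sketch assumes equal sample sizes"
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Random symmetric directions with unit Frobenius norm
    A = rng.standard_normal((n_proj, d, d))
    A = (A + A.transpose(0, 2, 1)) / 2.0
    A /= np.linalg.norm(A, axis=(1, 2), keepdims=True)
    # Log-Euclidean embedding, then 1D projections <A_k, log M>_F
    logX = np.stack([logm_spd(M) for M in X])
    logY = np.stack([logm_spd(M) for M in Y])
    projX = np.einsum('kij,nij->kn', A, logX)
    projY = np.einsum('kij,nij->kn', A, logY)
    # 1D Wasserstein-p between equal-size samples: match sorted projections
    projX.sort(axis=1)
    projY.sort(axis=1)
    return np.mean(np.abs(projX - projY) ** p) ** (1.0 / p)
```

A distance of this kind can then feed a Gaussian-type kernel, exp(-gamma * SW^2), which is one way to realize the kernel-methods route to brain-age regression that the abstract mentions.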
Related papers
- Symmetry Discovery for Different Data Types [52.2614860099811]
Equivariant neural networks incorporate symmetries into their architecture, achieving higher generalization performance.
We propose LieSD, a method for discovering symmetries via trained neural networks which approximate the input-output mappings of the tasks.
We validate the performance of LieSD on tasks with symmetries such as the two-body problem, the moment of inertia matrix prediction, and top quark tagging.
arXiv Detail & Related papers (2024-10-13T13:39:39Z)
- Understanding Matrix Function Normalizations in Covariance Pooling through the Lens of Riemannian Geometry [63.694184882697435]
Global Covariance Pooling (GCP) has been demonstrated to improve the performance of Deep Neural Networks (DNNs) by exploiting second-order statistics of high-level representations.
arXiv Detail & Related papers (2024-07-15T07:11:44Z)
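For intuition on what GCP computes: it replaces average pooling with the covariance of the final feature maps, and the matrix function normalization in the title is typically a matrix logarithm or square root applied spectrally. A minimal illustration of the log variant (hypothetical names, not the paper's code):

```python
import numpy as np

def log_covariance_pooling(feats, eps=1e-5):
    """Global covariance pooling with matrix-logarithm normalization.

    feats: CNN feature map of shape (c, h, w); returns a (c, c) descriptor.
    """
    X = feats.reshape(feats.shape[0], -1)      # c x (h*w) local descriptors
    X = X - X.mean(axis=1, keepdims=True)      # center each channel
    C = X @ X.T / (X.shape[1] - 1)             # second-order statistics
    C += eps * np.eye(C.shape[0])              # ensure strict positive definiteness
    w, V = np.linalg.eigh(C)                   # apply the matrix function spectrally
    return (V * np.log(w)) @ V.T               # log-normalized covariance descriptor
```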
- Weakly supervised covariance matrices alignment through Stiefel matrices estimation for MEG applications [64.20396555814513]
This paper introduces a novel domain adaptation technique for time series data, called Mixing model Stiefel Adaptation (MSA).
We exploit abundant unlabeled data in the target domain to ensure effective prediction by establishing pairwise correspondence with equivalent signal variances between domains.
MSA outperforms recent methods in brain-age regression with task variations using magnetoencephalography (MEG) signals from the Cam-CAN dataset.
arXiv Detail & Related papers (2024-01-24T19:04:49Z)
- An Efficient Algorithm for Clustered Multi-Task Compressive Sensing [60.70532293880842]
Clustered multi-task compressive sensing is a hierarchical model that solves multiple compressive sensing tasks.
The existing inference algorithm for this model is computationally expensive and does not scale well in high dimensions.
We propose a new algorithm that substantially accelerates model inference by avoiding explicit computation of the covariance matrices that arise during inference.
arXiv Detail & Related papers (2023-09-30T15:57:14Z)
- Multiplicative Updates for Online Convex Optimization over Symmetric Cones [28.815822236291392]
We introduce the Symmetric-Cone Multiplicative Weights Update (SCMWU), a projection-free algorithm for online optimization over the trace-one slice of an arbitrary symmetric cone.
We show that SCMWU is equivalent to Follow-the-Regularized-Leader and Online Mirror Descent with symmetric-cone negative entropy as regularizer.
arXiv Detail & Related papers (2023-07-06T17:06:43Z)
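For intuition, the positive-semidefinite-cone instance of this scheme is the classical matrix multiplicative weights update, where iterates are trace-one PSD matrices and the regularizer of the stated FTRL equivalence is the negative von Neumann entropy. A sketch of that special case only; SCMWU itself covers arbitrary symmetric cones and is not reproduced here:

```python
import numpy as np
from scipy.linalg import expm

def matrix_mwu(loss_matrices, eta=0.1):
    """Matrix multiplicative weights over trace-one PSD matrices.

    loss_matrices: sequence of symmetric (d, d) losses with bounded norm.
    Yields the iterate played at each round of the online protocol.
    """
    d = loss_matrices[0].shape[0]
    S = np.zeros((d, d))                 # running sum of observed losses
    for L in loss_matrices:
        X = expm(-eta * S)               # exponentiate accumulated losses
        yield X / np.trace(X)            # normalize onto the trace-one slice
        S += (L + L.T) / 2.0             # symmetrize before accumulating
```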
- Adaptive Log-Euclidean Metrics for SPD Matrix Learning [73.12655932115881]
We propose Adaptive Log-Euclidean Metrics (ALEMs), which extend the widely used Log-Euclidean Metric (LEM).
The experimental and theoretical results demonstrate the merit of the proposed metrics in improving the performance of SPD neural networks.
arXiv Detail & Related papers (2023-03-26T18:31:52Z)
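For reference, the baseline LEM that ALEMs extend measures distance in the matrix-logarithm domain, d(A, B) = ||log A - log B||_F. A minimal sketch of that baseline; the adaptive part (learning deformations of this metric) is not shown:

```python
import numpy as np

def logm_spd(M):
    """Matrix logarithm of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.log(w)) @ V.T

def lem_distance(A, B):
    """Log-Euclidean distance between SPD matrices A and B."""
    return np.linalg.norm(logm_spd(A) - logm_spd(B))
```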
- Classification of BCI-EEG based on augmented covariance matrix [0.0]
We propose a new framework based on the augmented covariance extracted from an autoregressive model to improve motor imagery classification.
We evaluate our approach on several datasets and subjects using the MOABB framework.
arXiv Detail & Related papers (2023-02-09T09:04:25Z)
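One common way to realize such an augmentation is time-lag embedding: stack delayed copies of the channels before taking the covariance, so the resulting SPD matrix also encodes temporal (autoregressive) structure. The sketch below shows that generic construction, which may differ from the paper's exact definition:

```python
import numpy as np

def augmented_covariance(X, order=2):
    """Covariance of time-lag-embedded M/EEG signals.

    X: array (n_channels, n_times). Stacking `order` delayed copies of the
    channels folds temporal structure into one larger SPD matrix.
    """
    n_t = X.shape[1]
    lagged = np.vstack([X[:, k:n_t - order + 1 + k] for k in range(order)])
    return np.cov(lagged)  # (order * n_channels) x (order * n_channels)
```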
- Robust Geometric Metric Learning [17.855338784378]
This paper proposes new algorithms for the metric learning problem.
A general approach, called Robust Geometric Metric Learning (RGML), is then studied.
The performance of RGML is demonstrated on real datasets.
arXiv Detail & Related papers (2022-02-23T14:55:08Z)
- Learning Log-Determinant Divergences for Positive Definite Matrices [47.61701711840848]
In this paper, we propose to learn similarity measures in a data-driven manner.
We capitalize on the αβ-log-det divergence, which is a meta-divergence parametrized by scalars α and β.
Our key idea is to cast these parameters in a continuum and learn them from data.
arXiv Detail & Related papers (2021-04-13T19:09:43Z)
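For concreteness, the αβ-log-det divergence (as defined by Cichocki et al.) between SPD matrices P and Q can be computed from the generalized eigenvalues of (P, Q). A minimal sketch for generic α, β; the degenerate limits and the paper's machinery for learning α and β are omitted:

```python
import numpy as np
from scipy.linalg import eigh

def ab_logdet_divergence(P, Q, alpha, beta):
    """Alpha-beta log-det divergence between SPD matrices P and Q.

    Valid for alpha != 0, beta != 0, alpha + beta != 0; the degenerate
    cases are defined by limits and are not handled here.
    """
    lam = eigh(P, Q, eigvals_only=True)  # generalized eigenvalues of (P, Q)
    vals = (alpha * lam**beta + beta * lam**(-alpha)) / (alpha + beta)
    return np.sum(np.log(vals)) / (alpha * beta)
```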
This list is automatically generated from the titles and abstracts of the papers on this site.