Adaptive Riemannian Metrics on SPD Manifolds
- URL: http://arxiv.org/abs/2303.15477v3
- Date: Thu, 18 May 2023 20:09:34 GMT
- Title: Adaptive Riemannian Metrics on SPD Manifolds
- Authors: Ziheng Chen, Yue Song, Tianyang Xu, Zhiwu Huang, Xiao-Jun Wu, Nicu
Sebe
- Abstract summary: Symmetric Positive Definite (SPD) matrices have received wide attention in machine learning due to their intrinsic capacity of encoding underlying structural correlation in data.
Existing fixed metric tensors might lead to sub-optimal performance in SPD matrix learning, especially for SPD neural networks.
- Score: 67.48576298756996
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Symmetric Positive Definite (SPD) matrices have received wide attention in
machine learning due to their intrinsic capacity of encoding underlying
structural correlation in data. To reflect the non-Euclidean geometry of SPD
manifolds, many successful Riemannian metrics have been proposed. However,
existing fixed metric tensors might lead to sub-optimal performance in SPD
matrix learning, especially for SPD neural networks. To remedy this
limitation, we leverage the idea of pullback and propose adaptive Riemannian
metrics for SPD manifolds. Moreover, we present comprehensive theories for our
metrics. Experiments on three datasets demonstrate that equipped with the
proposed metrics, SPD networks can exhibit superior performance.
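To make the pullback idea concrete, here is a minimal sketch, assuming a hypothetical learnable invertible matrix A that deforms the Log-Euclidean geometry; it illustrates the mechanism only and is not the paper's exact metric family.

```python
# Sketch: pull the Log-Euclidean metric back through a hypothetical
# diffeomorphism phi_A(X) = expm(A @ logm(X) @ A.T) with A invertible.
# The resulting geodesic distance has the closed form below.
import numpy as np
from scipy.linalg import logm

def pullback_le_dist(X, Y, A):
    """Geodesic distance under the pullback of the Log-Euclidean metric by phi_A."""
    LX = A @ logm(X) @ A.T  # image of X in the deformed log domain
    LY = A @ logm(Y) @ A.T
    return np.linalg.norm(LX - LY, "fro")
```

Setting A to the identity recovers the ordinary Log-Euclidean distance; learning A from data is what makes such a pullback metric adaptive.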
Related papers
- A Lie Group Approach to Riemannian Batch Normalization [59.48083303101632]
This paper establishes a unified framework for normalization techniques on Lie groups.
We focus on Symmetric Positive Definite (SPD) manifolds, which possess three distinct types of Lie group structures.
Specific normalization layers induced by these Lie groups are then proposed for SPD neural networks.
arXiv Detail & Related papers (2024-03-17T16:24:07Z)
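As a concrete instance of the Lie-group normalization idea above: a minimal sketch, assuming the Log-Euclidean structure (one of the three mentioned), of centering a batch of SPD matrices at the identity. Function names are hypothetical; this is an illustration, not the paper's exact layers.

```python
# Log-Euclidean batch centering: the batch Frechet mean under the
# Log-Euclidean metric is expm(mean_i logm(X_i)); centering divides
# it out in the log domain.
import numpy as np
from scipy.linalg import expm, logm

def spd_batchnorm_le(batch):
    """Center a batch of SPD matrices at the identity (Log-Euclidean view)."""
    logs = [logm(X) for X in batch]
    mean_log = np.mean(logs, axis=0)           # log-domain (Frechet) mean
    return [expm(L - mean_log) for L in logs]  # subtract the mean, map back
```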
- Riemannian Self-Attention Mechanism for SPD Networks [34.794770395408335]
An SPD manifold self-attention mechanism (SMSA) is proposed in this paper.
An SMSA-based geometric learning module (SMSA-GL) is designed to improve the discrimination of structured representations.
arXiv Detail & Related papers (2023-11-28T12:34:46Z)
- The Fisher-Rao geometry of CES distributions [50.50897590847961]
The Fisher-Rao information geometry makes tools from differential geometry available for analyzing statistical models.
We will present some practical uses of these geometric tools in the framework of elliptical distributions.
arXiv Detail & Related papers (2023-10-02T09:23:32Z)
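For one well-known special case of the elliptical (CES) family: zero-mean Gaussians, where the Fisher-Rao distance between covariance matrices has a closed form. A minimal sketch, not taken from the paper, and up to a convention-dependent scaling:

```python
# Fisher-Rao distance between N(0, S1) and N(0, S2):
# d = sqrt(1/2) * ||logm(S1^{-1/2} S2 S1^{-1/2})||_F,
# computable from the generalized eigenvalues of (S2, S1).
import numpy as np
from scipy.linalg import eigvalsh

def fisher_rao_centered_gaussian(S1, S2):
    lam = eigvalsh(S2, S1)  # eigenvalues of S1^{-1} S2
    return np.sqrt(0.5 * np.sum(np.log(lam) ** 2))
```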
- Riemannian Multinomial Logistics Regression for SPD Neural Networks [60.11063972538648]
We propose a new type of deep neural network for Symmetric Positive Definite (SPD) matrices.
Our framework offers a novel intrinsic explanation for the most popular LogEig classifier in existing SPD networks.
The effectiveness of our method is demonstrated in three applications: radar recognition, human action recognition, and electroencephalography (EEG) classification.
arXiv Detail & Related papers (2023-05-18T20:12:22Z)
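The LogEig classifier that the entry above reinterprets takes the matrix logarithm of an SPD feature, flattens it, and applies an ordinary linear classifier. A minimal sketch, with hypothetical names:

```python
import numpy as np
from scipy.linalg import logm

def logeig_logits(X, W, b):
    """LogEig classifier head.
    X: (n, n) SPD feature; W: (num_classes, n*n); b: (num_classes,)."""
    v = logm(X).reshape(-1)  # LogEig: map to the tangent space at I, vectorize
    return W @ v + b         # Euclidean multinomial logistic regression
```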
- DreamNet: A Deep Riemannian Network based on SPD Manifold Learning for Visual Classification [36.848148506610364]
We propose a new architecture for SPD matrix learning.
To enrich the deep representations, we adopt SPDNet as the backbone.
We then insert several residual-like blocks with shortcut connections to augment the representational capacity of the stacked Riemannian autoencoder (SRAE).
arXiv Detail & Related papers (2022-06-16T07:15:20Z)
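For orientation on the SPDNet backbone the entry above adopts, here is a minimal sketch of two standard SPDNet layers (BiMap and ReEig, after Huang & Van Gool), not DreamNet's residual blocks themselves:

```python
import numpy as np

def bimap(X, W):
    """BiMap layer: W has orthonormal columns, so W.T @ X @ W is a smaller SPD matrix."""
    return W.T @ X @ W

def reeig(X, eps=1e-4):
    """ReEig layer: rectify eigenvalues below eps, a ReLU-like SPD nonlinearity."""
    lam, U = np.linalg.eigh(X)
    return (U * np.maximum(lam, eps)) @ U.T
```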
- Neural Operator with Regularity Structure for Modeling Dynamics Driven by SPDEs [70.51212431290611]
Stochastic partial differential equations (SPDEs) are significant tools for modeling dynamics in many areas, including atmospheric sciences and physics.
We propose the Neural Operator with Regularity Structure (NORS), which incorporates feature vectors from the theory of regularity structures for modeling dynamics driven by SPDEs.
We conduct experiments on a variety of SPDEs, including the dynamic Phi^4_1 model and the 2D Navier-Stokes equation.
arXiv Detail & Related papers (2022-04-13T08:53:41Z)
- On Riemannian Optimization over Positive Definite Matrices with the Bures-Wasserstein Geometry [45.1944007785671]
We comparatively analyze the Bures-Wasserstein (BW) geometry with the popular Affine-Invariant (AI) geometry.
We build on an observation that the BW metric has a linear dependence on SPD matrices in contrast to the quadratic dependence of the AI metric.
We show that the BW geometry has non-negative curvature, which further improves the convergence rates of algorithms relative to the non-positively curved AI geometry.
arXiv Detail & Related papers (2021-06-01T07:39:19Z)
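The two distances compared in the entry above both have closed forms. A minimal sketch, assuming SPD inputs:

```python
import numpy as np
from scipy.linalg import sqrtm, eigvalsh

def bw_dist(X, Y):
    """Bures-Wasserstein: sqrt(tr X + tr Y - 2 tr((X^{1/2} Y X^{1/2})^{1/2}))."""
    Xh = sqrtm(X)
    cross = sqrtm(Xh @ Y @ Xh)
    return float(np.sqrt(np.trace(X) + np.trace(Y) - 2.0 * np.trace(cross).real))

def ai_dist(X, Y):
    """Affine-invariant: ||logm(X^{-1/2} Y X^{-1/2})||_F."""
    lam = eigvalsh(Y, X)  # generalized eigenvalues, i.e. the spectrum of X^{-1} Y
    return float(np.sqrt(np.sum(np.log(lam) ** 2)))
```

Note the linear vs. quadratic dependence the entry mentions: X and Y enter bw_dist through traces of (products of) square roots, whereas ai_dist depends on the spectrum of X^{-1} Y.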
- Learning Log-Determinant Divergences for Positive Definite Matrices [47.61701711840848]
In this paper, we propose to learn similarity measures in a data-driven manner.
We capitalize on the alpha-beta log-det divergence, a meta-divergence parametrized by the scalars alpha and beta.
Our key idea is to cast these parameters in a continuum and learn them from data.
arXiv Detail & Related papers (2021-04-13T19:09:43Z)
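For reference, one common parametrization of the alpha-beta log-det divergence (in the style of Cichocki et al.); a minimal sketch, not the paper's learning procedure:

```python
# D(P||Q) = (1/(alpha*beta)) * sum_i log((alpha*l_i^beta + beta*l_i^(-alpha)) / (alpha+beta)),
# where l_i are the eigenvalues of P Q^{-1}; assumes alpha, beta, alpha+beta != 0
# (other settings arise as limits).
import numpy as np
from scipy.linalg import eigvalsh

def ab_logdet_div(P, Q, alpha=0.5, beta=0.5):
    lam = eigvalsh(P, Q)  # generalized eigenvalues of (P, Q)
    terms = (alpha * lam**beta + beta * lam**(-alpha)) / (alpha + beta)
    return np.sum(np.log(terms)) / (alpha * beta)
```

Casting alpha and beta in a continuum, as the entry describes, turns this fixed formula into a family of divergences whose parameters can be fit to data.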