Riemannian Local Mechanism for SPD Neural Networks
- URL: http://arxiv.org/abs/2201.10145v3
- Date: Fri, 19 May 2023 07:40:57 GMT
- Title: Riemannian Local Mechanism for SPD Neural Networks
- Authors: Ziheng Chen, Tianyang Xu, Xiao-Jun Wu, Rui Wang, Zhiwu Huang, Josef
Kittler
- Abstract summary: We argue that it is of utmost importance to ensure the preservation of local geometric information in SPD networks.
We first analyse the convolution operator commonly used for capturing local information in Euclidean deep networks.
Based on this analysis, we define the local information in the SPD manifold and design a multi-scale submanifold block for mining local geometry.
- Score: 43.789561494266316
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Symmetric Positive Definite (SPD) matrices have received wide attention
for data representation in many scientific areas. Although there have been many
attempts to develop effective deep architectures for data processing
on the Riemannian manifold of SPD matrices, very few solutions explicitly mine
the local geometrical information in deep SPD feature representations. Given
the great success of local mechanisms in Euclidean methods, we argue that it is
of utmost importance to ensure the preservation of local geometric information
in the SPD networks. We first analyse the convolution operator commonly used
for capturing local information in Euclidean deep networks from the perspective
of a higher level of abstraction afforded by category theory. Based on this
analysis, we define the local information in the SPD manifold and design a
multi-scale submanifold block for mining local geometry. Experiments involving
multiple visual tasks validate the effectiveness of our approach. The
supplement and source code can be found at
https://github.com/GitZH-Chen/MSNet.git.
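The abstract does not spell out how the multi-scale submanifold block is built, but one natural reading of "local information" on the SPD manifold is via principal submatrices, which are themselves SPD and therefore live on lower-dimensional SPD submanifolds. The sketch below illustrates that reading only; it is not the authors' implementation, and all function names are hypothetical.

```python
import numpy as np

def principal_submatrices(spd, window, stride=1):
    """Slide a window along the diagonal and return the principal submatrices
    it covers; every principal submatrix of an SPD matrix is itself SPD."""
    n = spd.shape[0]
    return [spd[i:i + window, i:i + window]
            for i in range(0, n - window + 1, stride)]

def multi_scale_local_blocks(spd, windows=(3, 5, 7)):
    """Collect local SPD blocks at several window sizes (scales)."""
    return {w: principal_submatrices(spd, w) for w in windows}

# Toy usage: a random 10x10 SPD matrix built as A A^T + eps I.
rng = np.random.default_rng(0)
A = rng.standard_normal((10, 10))
spd = A @ A.T + 1e-3 * np.eye(10)
blocks = multi_scale_local_blocks(spd)
print({w: len(b) for w, b in blocks.items()})  # {3: 8, 5: 6, 7: 4}
```

Extracting diagonal blocks at several window sizes can be seen as a rough analogue of sliding multi-scale receptive fields over a Euclidean feature map, since each block remains a valid point on a smaller SPD manifold.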
Related papers
- A Lie Group Approach to Riemannian Batch Normalization [59.48083303101632]
This paper establishes a unified framework for normalization techniques on Lie groups.
We focus on Symmetric Positive Definite (SPD) manifolds, which possess three distinct types of Lie group structures.
Specific normalization layers induced by these Lie groups are then proposed for SPD neural networks.
arXiv Detail & Related papers (2024-03-17T16:24:07Z)
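The entry above builds normalization layers from Lie group structures on SPD matrices. As a far simpler, hedged illustration of normalizing SPD data, the sketch below centres and rescales a batch in the Log-Euclidean tangent space (matrix logarithms); this is a generic tangent-space construction, not the Lie-group batch normalization proposed in that paper, and the function names are hypothetical.

```python
import numpy as np

def sym_logm(S):
    """Matrix logarithm of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

def sym_expm(S):
    """Matrix exponential of a symmetric matrix via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * np.exp(w)) @ V.T

def log_euclidean_batchnorm(batch, eps=1e-5, scale=1.0):
    """Map a batch of SPD matrices to their matrix logarithms, centre and
    rescale them as Euclidean tensors, then map back with the exponential."""
    logs = np.stack([sym_logm(S) for S in batch])
    mean = logs.mean(axis=0)
    var = ((logs - mean) ** 2).mean()
    normalized = scale * (logs - mean) / np.sqrt(var + eps)
    return np.stack([sym_expm(L) for L in normalized])

# Toy usage on a batch of 4 random 5x5 SPD matrices.
rng = np.random.default_rng(1)
batch = np.stack([(lambda A: A @ A.T + 1e-3 * np.eye(5))(rng.standard_normal((5, 5)))
                  for _ in range(4)])
out = log_euclidean_batchnorm(batch)
print(out.shape)  # (4, 5, 5); each output matrix is again SPD
```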
- Riemannian Self-Attention Mechanism for SPD Networks [34.794770395408335]
An SPD manifold self-attention mechanism (SMSA) is proposed in this paper.
An SMSA-based geometric learning module (SMSA-GL) is designed to improve the discrimination of structured representations.
arXiv Detail & Related papers (2023-11-28T12:34:46Z)
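SMSA is only named in the summary above, so the following is a hedged toy sketch of what self-attention over SPD-valued tokens can look like: similarities are negative squared Log-Euclidean distances and each output is a weighted Log-Euclidean mean of the inputs. It is an illustrative stand-in, not the SMSA module itself.

```python
import numpy as np

def _sym_fn(S, fn):
    """Apply a scalar function to the eigenvalues of a symmetric matrix."""
    w, V = np.linalg.eigh(S)
    return (V * fn(w)) @ V.T

def spd_self_attention(tokens, temperature=1.0):
    """Toy self-attention over SPD matrices: weights from pairwise squared
    Log-Euclidean distances, aggregation as a weighted mean of matrix logs."""
    logs = np.stack([_sym_fn(S, np.log) for S in tokens])
    n = len(tokens)
    d2 = np.array([[np.sum((logs[i] - logs[j]) ** 2) for j in range(n)]
                   for i in range(n)])
    weights = np.exp(-d2 / temperature)
    weights /= weights.sum(axis=1, keepdims=True)       # row-wise softmax
    mixed = np.einsum('ij,jkl->ikl', weights, logs)     # attention in log space
    return np.stack([_sym_fn(L, np.exp) for L in mixed])

# Toy usage: 6 random 4x4 SPD "tokens".
rng = np.random.default_rng(2)
tokens = [(lambda A: A @ A.T + 1e-3 * np.eye(4))(rng.standard_normal((4, 4)))
          for _ in range(6)]
print(spd_self_attention(tokens).shape)  # (6, 4, 4)
```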
- Riemannian Multinomial Logistics Regression for SPD Neural Networks [60.11063972538648]
We propose a new type of deep neural network for Symmetric Positive Definite (SPD) matrices.
Our framework offers a novel intrinsic explanation for the popular LogEig classifier used in existing SPD networks.
The effectiveness of our method is demonstrated in three applications: radar recognition, human action recognition, and electroencephalography (EEG) classification.
arXiv Detail & Related papers (2023-05-18T20:12:22Z)
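For context, the LogEig classifier referred to above is, in existing SPD networks such as SPDNet, essentially a matrix logarithm followed by flattening and a Euclidean linear layer. A minimal sketch (the parameters W and b are placeholders):

```python
import numpy as np

def logeig_features(spd):
    """LogEig layer: matrix logarithm via eigendecomposition, then flatten
    the symmetric result into a Euclidean feature vector."""
    w, V = np.linalg.eigh(spd)
    return ((V * np.log(w)) @ V.T).reshape(-1)

def logeig_classifier(spd, W, b):
    """Linear classifier on LogEig features; returns class scores (logits)."""
    return W @ logeig_features(spd) + b

# Toy usage: score a random 5x5 SPD matrix against 3 classes.
rng = np.random.default_rng(3)
A = rng.standard_normal((5, 5))
spd = A @ A.T + 1e-3 * np.eye(5)
W, b = rng.standard_normal((3, 25)), np.zeros(3)
print(logeig_classifier(spd, W, b))
```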
- Adaptive Log-Euclidean Metrics for SPD Matrix Learning [73.12655932115881]
We propose Adaptive Log-Euclidean Metrics (ALEMs), which extend the widely used Log-Euclidean Metric (LEM).
The experimental and theoretical results demonstrate the merit of the proposed metrics in improving the performance of SPD neural networks.
arXiv Detail & Related papers (2023-03-26T18:31:52Z)
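As a reference point, the Log-Euclidean Metric that ALEMs extend induces the distance d(A, B) = ||log A - log B||_F. The sketch below computes it, together with a purely hypothetical weighted variant meant only to suggest what "adapting" the metric could look like; it is not the ALEM parameterization from the paper.

```python
import numpy as np

def sym_logm(S):
    """Matrix logarithm of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

def lem_distance(A, B):
    """Log-Euclidean Metric distance: Frobenius norm of the difference of logs."""
    return np.linalg.norm(sym_logm(A) - sym_logm(B), ord='fro')

def weighted_lem_distance(A, B, weights):
    """Hypothetical illustration only: entry-wise weights on the log-domain
    difference (NOT the ALEM construction from the paper)."""
    diff = sym_logm(A) - sym_logm(B)
    return np.sqrt(np.sum(weights * diff ** 2))

# Toy usage: with all-ones weights the two distances coincide.
rng = np.random.default_rng(4)
make_spd = lambda: (lambda M: M @ M.T + 1e-3 * np.eye(4))(rng.standard_normal((4, 4)))
A, B = make_spd(), make_spd()
print(lem_distance(A, B), weighted_lem_distance(A, B, np.ones((4, 4))))
```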
- Bayesian Hyperbolic Multidimensional Scaling [2.5944208050492183]
We propose a Bayesian approach to multidimensional scaling when the low-dimensional manifold is hyperbolic.
A case-control likelihood approximation allows for efficient sampling from the posterior distribution in larger data settings.
We evaluate the proposed method against state-of-the-art alternatives using simulations, canonical reference datasets, Indian village network data, and human gene expression data.
arXiv Detail & Related papers (2022-10-26T23:34:30Z)
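The Bayesian machinery of the entry above is not summarized here, but the underlying geometry is hyperbolic. The sketch below shows geodesic distances in the hyperboloid (Lorentz) model, one standard realization of hyperbolic embeddings; the choice of model and the lifting map are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def lorentz_inner(x, y):
    """Minkowski (Lorentzian) inner product: -x0*y0 + sum_i xi*yi."""
    return -x[0] * y[0] + np.dot(x[1:], y[1:])

def lift_to_hyperboloid(z):
    """Embed a Euclidean vector z on the hyperboloid via x0 = sqrt(1 + |z|^2)."""
    return np.concatenate(([np.sqrt(1.0 + np.dot(z, z))], z))

def hyperbolic_distance(x, y):
    """Geodesic distance on the hyperboloid: arccosh(-<x, y>_L)."""
    return np.arccosh(np.clip(-lorentz_inner(x, y), 1.0, None))

# Toy usage: distance between two 2-D embeddings lifted onto the hyperboloid.
x = lift_to_hyperboloid(np.array([0.3, -0.1]))
y = lift_to_hyperboloid(np.array([-0.5, 0.8]))
print(hyperbolic_distance(x, y))
```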
- DreamNet: A Deep Riemannian Network based on SPD Manifold Learning for Visual Classification [36.848148506610364]
We propose a new architecture for SPD matrix learning.
To enrich the deep representations, we adopt SPDNet as the backbone.
We then insert several residual-like blocks with shortcut connections to augment the representational capacity of the stacked Riemannian autoencoder (SRAE).
arXiv Detail & Related papers (2022-06-16T07:15:20Z)
- DeepSSN: a deep convolutional neural network to assess spatial scene similarity [11.608756441376544]
We propose a deep convolutional neural network, namely Deep Spatial Scene Network (DeepSSN), to better assess the spatial scene similarity.
We develop a prototype spatial scene search system using the proposed DeepSSN, in which users input spatial queries via sketch maps.
The proposed model is validated using multi-source conflated map data including 131,300 labeled scene samples after data augmentation.
arXiv Detail & Related papers (2022-02-07T23:53:20Z)
- DeHIN: A Decentralized Framework for Embedding Large-scale Heterogeneous Information Networks [64.62314068155997]
We present the Decentralized Embedding Framework for Heterogeneous Information Network (DeHIN) in this paper.
DeHIN presents a context-preserving partition mechanism that innovatively formulates a large HIN as a hypergraph.
Our framework then adopts a decentralized strategy to efficiently partition HINs using a tree-like pipeline.
arXiv Detail & Related papers (2022-01-08T04:08:36Z)
- Towards Interpretable Deep Networks for Monocular Depth Estimation [78.84690613778739]
We quantify the interpretability of a deep MDE network by the depth selectivity of its hidden units.
We propose a method to train interpretable MDE deep networks without changing their original architectures.
Experimental results demonstrate that our method is able to enhance the interpretability of deep MDE networks.
arXiv Detail & Related papers (2021-08-11T16:43:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.