Analysis of (sub-)Riemannian PDE-G-CNNs
- URL: http://arxiv.org/abs/2210.00935v4
- Date: Mon, 3 Apr 2023 11:40:16 GMT
- Title: Analysis of (sub-)Riemannian PDE-G-CNNs
- Authors: Gijs Bellaard, Daan L. J. Bon, Gautam Pai, Bart M. N. Smets, Remco Duits
- Abstract summary: Group equivariant convolutional neural networks (G-CNNs) have been successfully applied in geometric deep learning.
We show that the previously suggested approximative morphological kernels do not always accurately approximate the exact kernels.
We provide new theorems with better error estimates of the approximative kernels, and prove that they all carry the same reflectional symmetries as the exact ones.
- Score: 1.9249287163937971
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Group equivariant convolutional neural networks (G-CNNs) have been
successfully applied in geometric deep learning. Typically, G-CNNs have the
advantage over CNNs that they do not waste network capacity on training
symmetries that should have been hard-coded in the network. The recently
introduced framework of PDE-based G-CNNs (PDE-G-CNNs) generalises G-CNNs.
PDE-G-CNNs have the core advantages that they simultaneously 1) reduce network
complexity, 2) increase classification performance, and 3) provide geometric
interpretability. Their implementations primarily consist of linear and
morphological convolutions with kernels.
In this paper we show that the previously suggested approximative
morphological kernels do not always accurately approximate the exact
kernels. More specifically, depending on the spatial anisotropy of the
Riemannian metric, we argue that one must resort to sub-Riemannian
approximations. We solve this problem by providing a new approximative kernel
that works regardless of the anisotropy. We provide new theorems with better
error estimates of the approximative kernels, and prove that they all carry the
same reflectional symmetries as the exact ones.
We test the effectiveness of multiple approximative kernels within the
PDE-G-CNN framework on two datasets, and observe an improvement with the new
approximative kernels. We report that the PDE-G-CNNs again allow for a
considerable reduction of network complexity while having comparable or better
performance than G-CNNs and CNNs on the two datasets. Moreover, PDE-G-CNNs have
the advantage of better geometric interpretability over G-CNNs, as the
morphological kernels are related to association fields from neurogeometry.
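
For readers unfamiliar with the morphological convolutions the abstract refers to: an erosion with a quadratic structuring function is, discretely, the Hopf-Lax solution of a Hamilton-Jacobi-type PDE, and in PDE-G-CNNs the quadratic form is replaced by an approximation of a (sub-)Riemannian distance. Below is a minimal NumPy sketch of such an erosion on R^2; the anisotropic quadratic kernel and the weights w1, w2 are illustrative stand-ins for the paper's approximative kernels, not its actual implementation.

```python
import numpy as np

def anisotropic_quadratic_kernel(radius, w1, w2):
    """Structuring function k(y) = (w1*y1^2 + w2*y2^2) / 2.

    The ratio w1/w2 plays the role of the spatial anisotropy of the
    Riemannian metric; the paper's kernels instead approximate a
    (sub-)Riemannian distance on a Lie group such as SE(2).
    """
    ys = np.arange(-radius, radius + 1, dtype=float)
    y1, y2 = np.meshgrid(ys, ys, indexing="ij")
    return 0.5 * (w1 * y1**2 + w2 * y2**2)

def morphological_erosion(f, k):
    """(f erosion k)(x) = min_y [ f(x + y) + k(y) ], the discrete analogue
    of the Hopf-Lax solution of a Hamilton-Jacobi PDE."""
    r = k.shape[0] // 2
    fp = np.pad(f, r, mode="edge")
    out = np.full(f.shape, np.inf)
    for i in range(k.shape[0]):
        for j in range(k.shape[1]):
            window = fp[i:i + f.shape[0], j:j + f.shape[1]]  # f(x + y), y = (i - r, j - r)
            out = np.minimum(out, window + k[i, j])
    return out

f = np.random.rand(64, 64)
k = anisotropic_quadratic_kernel(radius=3, w1=4.0, w2=0.25)  # spatial anisotropy 16
eroded = morphological_erosion(f, k)
```

Loosely, the paper's point is that as the spatial anisotropy (here, w1/w2) grows, a Riemannian-style kernel of this form stops approximating the exact kernel well, and one must switch to a sub-Riemannian approximation instead.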
Related papers
- Spiking Graph Neural Network on Riemannian Manifolds [51.15400848660023]
Graph neural networks (GNNs) have become the dominant solution for learning on graphs.
Existing spiking GNNs consider graphs in Euclidean space, ignoring the structural geometry.
We present a Manifold-valued Spiking GNN (MSG).
MSG achieves superior performance to previous spiking GNNs and better energy efficiency than conventional GNNs.
arXiv Detail & Related papers (2024-10-23T15:09:02Z)
- Topological Neural Networks go Persistent, Equivariant, and Continuous [6.314000948709255]
We introduce TopNets as a broad framework that subsumes and unifies various methods at the intersection of GNNs/TNNs and persistent homology (PH).
TopNets achieve strong performance across diverse tasks, including antibody design, molecular dynamics simulation, and drug property prediction.
arXiv Detail & Related papers (2024-06-05T11:56:54Z)
- GNN-Ensemble: Towards Random Decision Graph Neural Networks [3.7620848582312405]
Graph Neural Networks (GNNs) have enjoyed wide spread applications in graph-structured data.
GNNs are required to learn latent patterns from a limited amount of training data to perform inferences on a vast amount of test data.
In this paper, we push ensemble learning of GNNs one step forward, with improved accuracy, robustness, and resilience to adversarial attacks.
arXiv Detail & Related papers (2023-03-20T18:24:01Z)
- Relation Embedding based Graph Neural Networks for Handling Heterogeneous Graph [58.99478502486377]
We propose a simple yet efficient framework that gives homogeneous GNNs adequate ability to handle heterogeneous graphs.
Specifically, we propose Relation Embedding based Graph Neural Networks (RE-GNNs), which employ only one parameter per relation to embed the importance of edge type relations and self-loop connections.
arXiv Detail & Related papers (2022-09-23T05:24:18Z)
- Optimization of Graph Neural Networks: Implicit Acceleration by Skip Connections and More Depth [57.10183643449905]
Graph Neural Networks (GNNs) have been studied from the lens of expressive power and generalization.
We study the training dynamics of GNNs through the lens of skip connections and network depth.
Our results provide the first theoretical support for the success of GNNs.
arXiv Detail & Related papers (2021-05-10T17:59:01Z)
- Learning Graph Neural Networks with Approximate Gradient Descent [24.49427608361397]
Two types of graph neural networks (GNNs) are investigated, depending on whether labels are attached to nodes or graphs.
A comprehensive framework for designing and analyzing convergence of GNN training algorithms is developed.
The proposed algorithm guarantees a linear convergence rate to the underlying true parameters of GNNs.
arXiv Detail & Related papers (2020-12-07T02:54:48Z)
- Permutation-equivariant and Proximity-aware Graph Neural Networks with Stochastic Message Passing [88.30867628592112]
Graph neural networks (GNNs) are emerging machine learning models on graphs.
Permutation-equivariance and proximity-awareness are two important properties highly desirable for GNNs.
We show that existing GNNs, mostly based on the message-passing mechanism, cannot simultaneously preserve the two properties.
In order to preserve node proximities, we augment the existing GNNs with stochastic node representations.
arXiv Detail & Related papers (2020-09-05T16:46:56Z)
- Fast Learning of Graph Neural Networks with Guaranteed Generalizability: One-hidden-layer Case [93.37576644429578]
Graph neural networks (GNNs) have made great progress recently on learning from graph-structured data in practice.
We provide a theoretically-grounded generalizability analysis of GNNs with one hidden layer for both regression and binary classification problems.
arXiv Detail & Related papers (2020-06-25T00:45:52Z)
- Optimization and Generalization Analysis of Transduction through Gradient Boosting and Application to Multi-scale Graph Neural Networks [60.22494363676747]
It is known that current graph neural networks (GNNs) are difficult to make deep due to the problem known as over-smoothing.
Multi-scale GNNs are a promising approach for mitigating the over-smoothing problem.
We derive the optimization and generalization guarantees of transductive learning algorithms that include multi-scale GNNs.
arXiv Detail & Related papers (2020-06-15T17:06:17Z)
- Generalization and Representational Limits of Graph Neural Networks [46.20253808402385]
We prove that several important graph properties cannot be computed by graph neural networks (GNNs) that rely entirely on local information.
We provide the first data-dependent generalization bounds for message passing GNNs.
Our bounds are much tighter than existing VC-dimension based guarantees for GNNs, and are comparable to Rademacher bounds for recurrent neural networks.
arXiv Detail & Related papers (2020-02-14T18:10:14Z)
- PDE-based Group Equivariant Convolutional Neural Networks [1.949912057689623]
We present a PDE-based framework that generalizes Group equivariant Convolutional Neural Networks (G-CNNs).
In this framework, a network layer is seen as a set of PDE-solvers where geometrically meaningful PDE-coefficients become the layer's trainable weights.
We present experiments to demonstrate the strength of the proposed PDE-G-CNNs in increasing the performance of deep learning based imaging applications.
arXiv Detail & Related papers (2020-01-24T15:00:46Z)
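
To make the "layer as a set of PDE-solvers" idea from that last entry concrete, here is a heavily simplified, hypothetical sketch of one PDE-type layer on R^2 using SciPy: a diffusion sub-step (linear convolution with a Gaussian) followed by a dilation sub-step (morphological convolution). In an actual PDE-G-CNN the layers act on Lie-group homogeneous spaces such as SE(2) and the trainable weights are the coefficients of a left-invariant metric; the parameter names and the operator splitting below are illustrative only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, grey_dilation

def pde_layer(f, diffusion_sigmas, dilation_weights, radius=3):
    """One illustrative PDE-type layer: diffusion, then dilation.

    diffusion_sigmas: per-axis Gaussian scales -- the linear convolution,
        solving a diffusion PDE.
    dilation_weights: per-axis weights of a quadratic structuring function
        -- the morphological convolution, solving a Hamilton-Jacobi PDE.
    In a PDE-G-CNN such metric parameters are the trainable weights.
    """
    u = gaussian_filter(f, sigma=diffusion_sigmas)       # diffusion sub-step
    ys = np.arange(-radius, radius + 1, dtype=float)
    y1, y2 = np.meshgrid(ys, ys, indexing="ij")
    w1, w2 = dilation_weights
    structure = -0.5 * (w1 * y1**2 + w2 * y2**2)         # concave, max 0 at origin
    return grey_dilation(u, structure=structure)         # dilation sub-step

out = pde_layer(np.random.rand(64, 64),
                diffusion_sigmas=(2.0, 0.5),
                dilation_weights=(1.0, 4.0))
```

Chaining such layers, with dilations and erosions taking over the role of nonlinearities and pooling, is roughly how the PDE-based framework replaces the usual CNN toolbox, per the abstracts above.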
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.