Convolutional Neural Networks on Manifolds: From Graphs and Back
- URL: http://arxiv.org/abs/2210.00376v1
- Date: Sat, 1 Oct 2022 21:17:39 GMT
- Title: Convolutional Neural Networks on Manifolds: From Graphs and Back
- Authors: Zhiyang Wang and Luana Ruiz and Alejandro Ribeiro
- Abstract summary: We propose a manifold neural network (MNN) composed of a bank of manifold convolutional filters and pointwise nonlinearities.
In summary, we treat the manifold model as the limit of large graphs and construct MNNs, while graph neural networks can be recovered by discretizing the MNNs.
- Score: 122.06927400759021
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Geometric deep learning has gained much attention in recent years due to the growing availability of data acquired from non-Euclidean domains; examples include point clouds for 3D models and wireless sensor networks in communications. Graphs are common models for connecting these discrete data points and capturing the underlying geometric structure. As the amount of such geometric data grows, graphs of arbitrarily large size tend to converge to a limit model: the manifold. Deep neural network architectures have proven to be a powerful technique for solving problems on data residing on manifolds. In this paper, we propose a manifold neural network (MNN) composed of a bank of manifold convolutional filters and pointwise nonlinearities. We define a manifold convolution operation that is consistent with the discrete graph convolution by discretizing in both the space and time domains. In summary, we treat the manifold model as the limit of large graphs and construct MNNs on it, while graph neural networks can be recovered by discretizing the MNNs. We carry out experiments on a point-cloud dataset to showcase the performance of the proposed MNNs.
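To make the construction concrete, here is a minimal Python sketch (not the authors' implementation) of one discretized MNN layer: the manifold convolution, defined through the Laplace-Beltrami operator, is approximated by a polynomial graph filter in the Laplacian of a k-nearest-neighbor graph over sampled points, followed by a pointwise nonlinearity. The function names, the k-NN construction, and the filter order are illustrative assumptions.

```python
# Hedged sketch of a discretized MNN layer: the manifold convolution is
# approximated by a polynomial filter in the graph Laplacian L built from
# points sampled on the manifold. Names here are hypothetical.
import numpy as np

def knn_graph_laplacian(points: np.ndarray, k: int = 8) -> np.ndarray:
    """Combinatorial Laplacian of a k-NN graph over sampled manifold points."""
    n = points.shape[0]
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    adj = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]  # skip the point itself
        adj[i, nbrs] = adj[nbrs, i] = 1.0  # symmetrize the adjacency
    return np.diag(adj.sum(1)) - adj

def mnn_layer(L: np.ndarray, x: np.ndarray, h: np.ndarray) -> np.ndarray:
    """One layer: a polynomial filter bank in L, then a pointwise ReLU.

    x: (n, f_in) node signals; h: (K, f_in, f_out) filter taps.
    """
    K, _, f_out = h.shape
    y = np.zeros((x.shape[0], f_out))
    Lk_x = x.copy()
    for k in range(K):
        y += Lk_x @ h[k]       # accumulate sum_k L^k x h_k
        Lk_x = L @ Lk_x        # next Laplacian power of the signal
    return np.maximum(y, 0.0)  # pointwise nonlinearity

# Usage on points sampled from the unit 2-sphere:
rng = np.random.default_rng(0)
pts = rng.normal(size=(200, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
L = knn_graph_laplacian(pts)
x = rng.normal(size=(200, 4))           # 4 input features per point
h = 0.1 * rng.normal(size=(3, 4, 8))    # order-3 filters, 8 output features
out = mnn_layer(L, x, h)                # (200, 8) filtered features
```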
Related papers
- A Theoretical Study of Neural Network Expressive Power via Manifold Topology [9.054396245059555]
A prevalent assumption regarding real-world data is that it lies on or close to a low-dimensional manifold.
In this study, we investigate network expressive power in terms of the latent data manifold.
We present an upper bound on the size of ReLU neural networks.
arXiv Detail & Related papers (2024-10-21T22:10:24Z)
- Generalization of Geometric Graph Neural Networks [84.01980526069075]
We study the generalization capabilities of geometric graph neural networks (GNNs).
We prove a generalization gap between the optimal empirical risk and the optimal statistical risk of this GNN.
The most important observation is that the generalization capability can be realized with one large graph instead of being limited to the size of the graph as in previous results.
arXiv Detail & Related papers (2024-09-08T18:55:57Z)
- Generalization of Graph Neural Networks is Robust to Model Mismatch [84.01980526069075]
Graph neural networks (GNNs) have demonstrated their effectiveness in various tasks supported by their generalization capabilities.
In this paper, we examine GNNs that operate on geometric graphs generated from manifold models.
Our analysis reveals the robustness of the GNN generalization in the presence of such model mismatch.
arXiv Detail & Related papers (2024-08-25T16:00:44Z)
- Manifold Filter-Combine Networks [22.19399386945317]
We introduce a class of manifold neural networks (MNNs) that we call Manifold Filter-Combine Networks (MFCNs).
This class includes a wide variety of subclasses that can be thought of as manifold analogs of various popular graph neural networks (GNNs).
arXiv Detail & Related papers (2023-07-08T23:19:53Z)
- Geometric Graph Filters and Neural Networks: Limit Properties and Discriminability Trade-offs [122.06927400759021]
We study the relationship between a graph neural network (GNN) and a manifold neural network (MNN) when the graph is constructed from a set of points sampled from the manifold.
We prove non-asymptotic error bounds showing that convolutional filters and neural networks on these graphs converge to convolutional filters and neural networks on the continuous manifold.
arXiv Detail & Related papers (2023-05-29T08:27:17Z)
- A Convergence Rate for Manifold Neural Networks [6.428026202398116]
We introduce a method for constructing manifold neural networks using the spectral decomposition of the Laplace-Beltrami operator.
We build on this construction by establishing a rate of convergence that depends on the intrinsic dimension of the manifold.
We also discuss how the rate of convergence depends on the depth of the network and the number of filters used in each layer; a minimal spectral-filtering sketch appears after this list.
arXiv Detail & Related papers (2022-12-23T22:44:25Z)
- EIGNN: Efficient Infinite-Depth Graph Neural Networks [51.97361378423152]
Graph neural networks (GNNs) are widely used for modelling graph-structured data in numerous applications.
Motivated by the depth limitations of existing GNNs, we propose a GNN model with infinite depth, which we call Efficient Infinite-Depth Graph Neural Networks (EIGNN).
We show that EIGNN has a better ability to capture long-range dependencies than recent baselines, and consistently achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-02-22T08:16:58Z)
- Graph Neural Networks: Architectures, Stability and Transferability [176.3960927323358]
Graph Neural Networks (GNNs) are information processing architectures for signals supported on graphs.
They are generalizations of convolutional neural networks (CNNs) in which individual layers contain banks of graph convolutional filters.
arXiv Detail & Related papers (2020-08-04T18:57:36Z)
- Geom-GCN: Geometric Graph Convolutional Networks [15.783571061254847]
We propose a novel geometric aggregation scheme for graph neural networks to overcome two weaknesses of existing message-passing aggregators.
The proposed aggregation scheme is permutation-invariant and consists of three modules, node embedding, structural neighborhood, and bi-level aggregation.
We also present an implementation of the scheme in graph convolutional networks, termed Geom-GCN, to perform transductive learning on graphs.
arXiv Detail & Related papers (2020-02-13T00:03:09Z)
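As noted in the convergence-rate entry above, the spectral construction it refers to can be illustrated with a short, hedged Python sketch: a filter is specified as a function g of the Laplacian eigenvalues, and on sampled points the graph Laplacian's eigendecomposition stands in for the Laplace-Beltrami spectrum. The function names and the low-pass choice of g are assumptions for illustration only.

```python
# Hedged sketch of a spectral filter: on sampled points, the graph
# Laplacian eigenpairs approximate the Laplace-Beltrami spectrum, and the
# filter acts as a function g of the eigenvalues. Names are hypothetical.
import numpy as np

def spectral_filter(L: np.ndarray, x: np.ndarray, g) -> np.ndarray:
    """Apply y = V g(Lambda) V^T x for symmetric L = V Lambda V^T."""
    lam, V = np.linalg.eigh(L)                # Laplacian eigendecomposition
    return V @ (g(lam)[:, None] * (V.T @ x))  # scale each mode by g(lambda)

# Example: heat-kernel low-pass response g(lambda) = exp(-t * lambda).
rng = np.random.default_rng(1)
n = 100
A = (rng.random((n, n)) < 0.05).astype(float)
A = np.triu(A, 1)
A = A + A.T                                   # simple undirected graph
L = np.diag(A.sum(1)) - A
x = rng.normal(size=(n, 1))                   # one input feature per node
y = spectral_filter(L, x, lambda lam: np.exp(-0.5 * lam))
```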