Convolutional Neural Networks on Manifolds: From Graphs and Back
- URL: http://arxiv.org/abs/2210.00376v1
- Date: Sat, 1 Oct 2022 21:17:39 GMT
- Title: Convolutional Neural Networks on Manifolds: From Graphs and Back
- Authors: Zhiyang Wang and Luana Ruiz and Alejandro Ribeiro
- Abstract summary: We propose a manifold neural network (MNN) composed of a bank of manifold convolutional filters and point-wise nonlinearities.
In summary, we treat the manifold model as the limit of large graphs and construct MNNs on it, and we can recover graph neural networks by discretizing the MNNs.
- Score: 122.06927400759021
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Geometric deep learning has gained much attention in recent years due to
the growing availability of data acquired from non-Euclidean domains. Examples
include point clouds for 3D models and wireless sensor networks in
communications. Graphs are common models for connecting these discrete data
points and capturing the underlying geometric structure. As the amount of such
geometric data grows, graphs of arbitrarily large size tend to converge to a
limit model -- the manifold. Deep neural network architectures have proven to
be a powerful technique for solving problems on data residing on a manifold.
In this paper, we propose a manifold neural network (MNN) composed of a bank
of manifold convolutional filters and point-wise nonlinearities. We define a
manifold convolution operation that is consistent with the discrete graph
convolution by discretizing in both the space and time domains. In summary, we
treat the manifold model as the limit of large graphs, construct MNNs on it,
and recover graph neural networks by discretizing the MNNs. We carry out
experiments on a point-cloud dataset to showcase the performance of the
proposed MNNs.
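As a hedged illustration (not the paper's exact construction), the discrete counterpart of a manifold convolution is a polynomial graph filter followed by a point-wise nonlinearity. The toy cycle graph, filter taps, and function names below are hypothetical choices for demonstration only.

```python
import numpy as np

def graph_laplacian(A):
    """Combinatorial Laplacian L = D - A of an adjacency matrix A."""
    return np.diag(A.sum(axis=1)) - A

def graph_filter(L, x, h):
    """Polynomial graph filter: apply sum_k h[k] * L^k to the signal x."""
    y = np.zeros_like(x)
    Lk = np.eye(L.shape[0])  # L^0 = identity
    for hk in h:
        y += hk * (Lk @ x)
        Lk = Lk @ L
    return y

def gnn_layer(L, x, h):
    """One MNN-style discrete layer: graph filter + point-wise ReLU."""
    return np.maximum(graph_filter(L, x, h), 0.0)

# Toy 4-node cycle graph, conceptually a sampling of a 1-D manifold.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = graph_laplacian(A)
x = np.array([1.0, 0.0, -1.0, 0.0])
print(gnn_layer(L, x, h=[0.5, 0.25, 0.125]))  # → [1.5 0.  0.  0. ]
```

As the number of sampled points grows, filters of this form are what converge to their continuous manifold counterparts.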
Related papers
- Manifold Filter-Combine Networks [22.19399386945317]
We introduce a class of manifold neural networks (MNNs) that we call Manifold Filter-Combine Networks (MFCNs).
This class includes a wide variety of subclasses that can be thought of as the manifold analogs of various popular graph neural networks (GNNs).
arXiv Detail & Related papers (2023-07-08T23:19:53Z) - Geometric Graph Filters and Neural Networks: Limit Properties and
Discriminability Trade-offs [122.06927400759021]
We study the relationship between a graph neural network (GNN) and a manifold neural network (MNN) when the graph is constructed from a set of points sampled from the manifold.
We prove non-asymptotic error bounds showing that convolutional filters and neural networks on these graphs converge to convolutional filters and neural networks on the continuous manifold.
arXiv Detail & Related papers (2023-05-29T08:27:17Z) - A Convergence Rate for Manifold Neural Networks [6.428026202398116]
We introduce a method for constructing manifold neural networks using the spectral decomposition of the Laplace-Beltrami operator.
We build upon this result by establishing a rate of convergence that depends on the intrinsic dimension of the manifold.
We also discuss how the rate of convergence depends on the depth of the network and the number of filters used in each layer.
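A minimal sketch of the spectral idea described above, in the discrete setting: a filter is defined by a frequency response applied to the eigenvalues of a (graph) Laplacian, mirroring the Laplace-Beltrami-based construction. The path-graph Laplacian and the low-pass response `g` are hypothetical illustrations, not the paper's setup.

```python
import numpy as np

def spectral_filter(L, x, g):
    """Apply h(L) x = V diag(g(lam)) V^T x via the eigendecomposition of L."""
    lam, V = np.linalg.eigh(L)  # L is symmetric, so eigh applies
    return V @ (g(lam) * (V.T @ x))

# Toy 3-node path-graph Laplacian and a low-pass response g(lam) = exp(-lam).
L = np.array([[ 1, -1,  0],
              [-1,  2, -1],
              [ 0, -1,  1]], dtype=float)
x = np.array([1.0, 0.0, 0.0])
y = spectral_filter(L, x, lambda lam: np.exp(-lam))
```

With the identity response g(lam) = 1, the filter reduces to the identity map, which is a quick sanity check on the decomposition.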
arXiv Detail & Related papers (2022-12-23T22:44:25Z) - Training Graph Neural Networks on Growing Stochastic Graphs [114.75710379125412]
Graph Neural Networks (GNNs) rely on graph convolutions to exploit meaningful patterns in networked data.
We propose to learn GNNs on very large graphs by leveraging the limit object of a sequence of growing graphs, the graphon.
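The graphon limit mentioned above can be sketched numerically: sampling graphs of growing size from a fixed graphon W(u, v) yields the sequence whose limit object the method exploits. The graphon below and the sampling routine are hypothetical choices for illustration.

```python
import numpy as np

def sample_graph(W, n, rng):
    """Sample an n-node graph: edge (i, j) appears with probability W(u_i, u_j)."""
    u = rng.random(n)                          # latent node positions in [0, 1]
    probs = W(u[:, None], u[None, :])          # pairwise edge probabilities
    A = (rng.random((n, n)) < probs).astype(float)
    A = np.triu(A, 1)                          # keep upper triangle, no self-loops
    return A + A.T                             # symmetrize

W = lambda u, v: np.exp(-np.abs(u - v))        # a smooth, bounded graphon
rng = np.random.default_rng(1)
A_small = sample_graph(W, 20, rng)
A_large = sample_graph(W, 200, rng)
```

Training on moderate-size samples and transferring to larger ones relies on both being draws from the same limit object W.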
arXiv Detail & Related papers (2022-10-27T16:00:45Z) - EIGNN: Efficient Infinite-Depth Graph Neural Networks [51.97361378423152]
Graph neural networks (GNNs) are widely used for modelling graph-structured data in numerous applications.
Motivated by the limitations of finite-depth models, we propose a GNN model with infinite depth, which we call Efficient Infinite-Depth Graph Neural Networks (EIGNN).
We show that EIGNN has a better ability to capture long-range dependencies than recent baselines, and consistently achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-02-22T08:16:58Z) - Overcoming Oversmoothness in Graph Convolutional Networks via Hybrid
Scattering Networks [11.857894213975644]
We propose a hybrid graph neural network (GNN) framework that combines traditional GCN filters with band-pass filters defined via the geometric scattering transform.
Our theoretical results establish the complementary benefits of the scattering filters to leverage structural information from the graph, while our experiments show the benefits of our method on various learning tasks.
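A hedged sketch of the hybrid idea: combine a low-pass GCN-style smoothing with band-pass diffusion wavelets Psi_j = P^(2^(j-1)) - P^(2^j) from the geometric scattering transform, where P is the lazy random-walk matrix. The toy graph and the concatenation at the end are hypothetical illustrations, not the paper's exact architecture.

```python
import numpy as np

def lazy_walk(A):
    """Lazy random-walk matrix P = (I + A D^{-1}) / 2 (column-stochastic)."""
    D_inv = np.diag(1.0 / A.sum(axis=0))
    return 0.5 * (np.eye(A.shape[0]) + A @ D_inv)

def scattering_bands(A, x, J):
    """Band-pass responses |Psi_j x| for dyadic scales j = 1..J."""
    P = lazy_walk(A)
    feats, Pk = [], P
    for _ in range(J):
        Pk2 = Pk @ Pk
        feats.append(np.abs((Pk - Pk2) @ x))   # |Psi_j x| = |(P^(2^(j-1)) - P^(2^j)) x|
        Pk = Pk2
    return feats

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = np.array([1.0, -1.0, 0.5, 0.0])
low_pass = lazy_walk(A) @ x                    # GCN-like smoothing
hybrid = np.concatenate([low_pass] + scattering_bands(A, x, J=2))
```

The band-pass channels retain high-frequency structure that repeated low-pass smoothing would wash out, which is the intuition behind countering oversmoothing.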
arXiv Detail & Related papers (2022-01-22T00:47:41Z) - Graph Neural Networks: Architectures, Stability and Transferability [176.3960927323358]
Graph Neural Networks (GNNs) are information processing architectures for signals supported on graphs.
They are generalizations of convolutional neural networks (CNNs) in which individual layers contain banks of graph convolutional filters.
arXiv Detail & Related papers (2020-08-04T18:57:36Z) - Graphs, Convolutions, and Neural Networks: From Graph Filters to Graph
Neural Networks [183.97265247061847]
We leverage graph signal processing to characterize the representation space of graph neural networks (GNNs).
We discuss the role of graph convolutional filters in GNNs and show that any architecture built with such filters has the fundamental properties of permutation equivariance and stability to changes in the topology.
We also study the use of GNNs in recommender systems and learning decentralized controllers for robot swarms.
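The permutation-equivariance property stated above can be checked numerically: relabeling the nodes of a graph permutes a polynomial graph filter's output in exactly the same way. The random graph, signal, and filter taps below are hypothetical illustrations.

```python
import numpy as np

def graph_filter(S, x, h):
    """Polynomial graph filter sum_k h[k] * S^k x on a graph shift operator S."""
    y, Sk = np.zeros_like(x), np.eye(S.shape[0])
    for hk in h:
        y += hk * (Sk @ x)
        Sk = Sk @ S
    return y

rng = np.random.default_rng(0)
A = rng.random((5, 5)); A = (A + A.T) / 2    # symmetric weighted adjacency
x = rng.random(5)
h = [0.4, 0.3, 0.2]

P = np.eye(5)[rng.permutation(5)]            # random permutation matrix
lhs = graph_filter(P @ A @ P.T, P @ x, h)    # filter on the relabeled graph
rhs = P @ graph_filter(A, x, h)              # relabeled output of the original
print(np.allclose(lhs, rhs))                 # → True: equivariance holds
```

The identity follows from (P A P^T)^k = P A^k P^T, so every term of the polynomial commutes with the relabeling in the same way.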
arXiv Detail & Related papers (2020-03-08T13:02:15Z) - Geom-GCN: Geometric Graph Convolutional Networks [15.783571061254847]
We propose a novel geometric aggregation scheme for graph neural networks to overcome two weaknesses of message-passing aggregators: losing the structural information of nodes in neighborhoods and lacking the ability to capture long-range dependencies in disassortative graphs.
The proposed aggregation scheme is permutation-invariant and consists of three modules, node embedding, structural neighborhood, and bi-level aggregation.
We also present an implementation of the scheme in graph convolutional networks, termed Geom-GCN, to perform transductive learning on graphs.
arXiv Detail & Related papers (2020-02-13T00:03:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.