Manifold Filter-Combine Networks
- URL: http://arxiv.org/abs/2307.04056v3
- Date: Tue, 5 Sep 2023 18:14:55 GMT
- Title: Manifold Filter-Combine Networks
- Authors: Joyce Chew and Edward De Brouwer and Smita Krishnaswamy and Deanna Needell and Michael Perlmutter
- Abstract summary: We introduce a class of manifold neural networks (MNNs) that we call Manifold Filter-Combine Networks (MFCNs). This class includes a wide variety of subclasses that can be thought of as the manifold analog of various popular graph neural networks (GNNs).
- Score: 22.19399386945317
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a class of manifold neural networks (MNNs) that we call
Manifold Filter-Combine Networks (MFCNs), which aim to further our
understanding of MNNs, analogous to how the aggregate-combine framework aids
the understanding of graph neural networks (GNNs). This class includes a wide
variety of subclasses that can be thought of as the manifold analog of various
popular GNNs. We then consider a method, based on building a data-driven graph,
for implementing such networks when one does not have global knowledge of the
manifold, but merely has access to finitely many sample points. We provide
sufficient conditions for the network to provably converge to its continuum
limit as the number of sample points tends to infinity. Unlike previous work
(which focused on specific graph constructions), our rate of convergence does
not directly depend on the number of filters used. Moreover, it exhibits linear
dependence on the depth of the network rather than the exponential dependence
obtained previously. Additionally, we provide several examples of interesting
subclasses of MFCNs and of the rates of convergence that are obtained under
specific graph constructions.
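The filter-then-combine structure described in the abstract, together with the data-driven graph construction, can be sketched as follows. This is an illustrative NumPy sketch only, not the paper's exact construction: the kNN graph, the unnormalized Laplacian, the polynomial filter coefficients, and all parameter values are assumptions chosen for demonstration.

```python
import numpy as np

def knn_graph_laplacian(points, k=8):
    """Build a symmetric kNN graph from sample points and return its
    unnormalized graph Laplacian, a data-driven stand-in for the
    manifold's Laplace-Beltrami operator (an illustrative choice,
    not necessarily the paper's construction)."""
    n = len(points)
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]  # skip self (distance 0)
        W[i, nbrs] = 1.0
    W = np.maximum(W, W.T)                 # symmetrize
    return np.diag(W.sum(1)) - W

def mfcn_layer(L, X, filter_coeffs, combine):
    """One filter-combine step: apply a polynomial spectral filter
    p_j(L) to each input channel (the 'filter' stage), then mix the
    filtered channels with a learned matrix (the 'combine' stage),
    followed by a pointwise nonlinearity."""
    filtered = []
    for j in range(X.shape[1]):
        y = np.zeros(X.shape[0])
        Lx = X[:, j].copy()
        for c in filter_coeffs[j]:         # y = sum_m c_m L^m x
            y += c * Lx
            Lx = L @ Lx
        filtered.append(y)
    H = np.stack(filtered, axis=1)
    return np.maximum(H @ combine, 0.0)    # ReLU

# Toy run on points sampled from the unit circle (a 1-D manifold).
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
pts = np.c_[np.cos(theta), np.sin(theta)]
L = knn_graph_laplacian(pts, k=6)
X = rng.standard_normal((200, 3))                  # 3 input channels
coeffs = [[1.0, -0.5], [1.0, -0.5], [1.0, -0.5]]   # p(L) = I - 0.5 L
C = rng.standard_normal((3, 4))                    # combine 3 -> 4 channels
out = mfcn_layer(L, X, coeffs, C)
print(out.shape)  # (200, 4)
```

As the number of sample points grows, the paper's convergence results concern how the outputs of such discretized layers approach those of the continuum network on the underlying manifold.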
Related papers
- Convergence of Manifold Filter-Combine Networks [18.590886216749528]
We introduce Manifold Filter-Combine Networks (MFCNs) to better understand manifold neural networks (MNNs).
We propose a method for implementing MFCNs on high-dimensional point clouds that relies on approximating the manifold by a sparse graph.
We prove that our method is consistent in the sense that it converges to a continuum limit as the number of data points tends to infinity.
arXiv Detail & Related papers (2024-10-18T17:40:58Z)
- Geometric Graph Filters and Neural Networks: Limit Properties and Discriminability Trade-offs [122.06927400759021]
We study the relationship between a graph neural network (GNN) and a manifold neural network (MNN) when the graph is constructed from a set of points sampled from the manifold.
We prove non-asymptotic error bounds showing that convolutional filters and neural networks on these graphs converge to convolutional filters and neural networks on the continuous manifold.
arXiv Detail & Related papers (2023-05-29T08:27:17Z)
- Convolutional Neural Networks on Manifolds: From Graphs and Back [122.06927400759021]
We propose a manifold neural network (MNN) composed of a bank of manifold convolutional filters and point-wise nonlinearities.
In sum, we treat the manifold model as the limit of large graphs when constructing MNNs, and we can recover graph neural networks by discretizing MNNs.
arXiv Detail & Related papers (2022-10-01T21:17:39Z)
- Deep Architecture Connectivity Matters for Its Convergence: A Fine-Grained Analysis [94.64007376939735]
We theoretically characterize the impact of connectivity patterns on the convergence of deep neural networks (DNNs) under gradient descent training.
We show that by simply filtering out "unpromising" connectivity patterns, we can trim down the number of models to evaluate.
arXiv Detail & Related papers (2022-05-11T17:43:54Z)
- MGDCF: Distance Learning via Markov Graph Diffusion for Neural Collaborative Filtering [96.65234340724237]
We show the equivalence between some state-of-the-art GNN-based CF models and a traditional 1-layer NRL model based on context encoding.
We present Markov Graph Diffusion Collaborative Filtering (MGDCF) to generalize some state-of-the-art GNN-based CF models.
arXiv Detail & Related papers (2022-04-05T17:24:32Z)
- Understanding the Basis of Graph Convolutional Neural Networks via an Intuitive Matched Filtering Approach [7.826806223782053]
Graph Convolutional Neural Networks (GCNNs) are becoming a preferred model for data processing on irregular domains.
We show that their convolution layers effectively perform matched filtering of input data with the chosen patterns.
A numerical example guides the reader through the various steps of GCNN operation and learning both visually and numerically.
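The matched-filtering view of graph convolution can be illustrated with a minimal sketch: a polynomial graph filter y = Σ_k h_k S^k x correlates each node's neighborhood with the template encoded by the filter taps h. The shift operator S (a small path graph), the taps, and the impulse input below are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

# 6-node path graph; the adjacency matrix acts as the graph shift operator S.
A = np.diag(np.ones(5), 1)
S = A + A.T
h = np.array([0.5, 1.0, 0.25])    # filter taps (the matched "template")

def graph_filter(S, h, x):
    """Polynomial graph filter y = h0*x + h1*S@x + h2*S^2@x + ...
    Each output entry correlates the node's multi-hop neighborhood
    with the pattern encoded by the taps h."""
    y, Sx = np.zeros_like(x), x.copy()
    for hk in h:
        y += hk * Sx
        Sx = S @ Sx
    return y

x = np.array([0., 0., 1., 0., 0., 0.])   # impulse at node 2
print(graph_filter(S, h, x))             # the filter's impulse response
```

The impulse response spreads the template over the 2-hop neighborhood of node 2, which is exactly the matched-filter correlation pattern the paper's intuition describes.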
arXiv Detail & Related papers (2021-08-23T12:41:06Z)
- Ranking Structured Objects with Graph Neural Networks [0.0]
RankGNNs are trained with a set of pair-wise preferences between graphs, each indicating that one graph is preferred over the other.
One practical application of this problem is drug screening, where an expert wants to find the most promising molecules in a large collection of drug candidates.
We empirically demonstrate that our proposed pair-wise RankGNN approach either significantly outperforms or at least matches the ranking performance of the naive point-wise baseline approach.
arXiv Detail & Related papers (2021-04-18T14:40:59Z)
- Recurrent Graph Tensor Networks: A Low-Complexity Framework for Modelling High-Dimensional Multi-Way Sequence [24.594587557319837]
We develop a graph filter framework for approximating the modelling of hidden states in Recurrent Neural Networks (RNNs).
The proposed framework is validated through several multi-way sequence modelling tasks and benchmarked against traditional RNNs.
We show that the proposed RGTN is capable of not only outperforming standard RNNs, but also mitigating the Curse of Dimensionality associated with traditional RNNs.
arXiv Detail & Related papers (2020-09-18T10:13:36Z)
- Policy-GNN: Aggregation Optimization for Graph Neural Networks [60.50932472042379]
Graph neural networks (GNNs) aim to model the local graph structures and capture the hierarchical patterns by aggregating the information from neighbors.
It is a challenging task to develop an effective aggregation strategy for each node, given complex graphs and sparse features.
We propose Policy-GNN, a meta-policy framework that models the sampling procedure and message passing of GNNs into a combined learning process.
arXiv Detail & Related papers (2020-06-26T17:03:06Z)
- Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn the binary representations of the nodes with binary network parameters.
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space.
arXiv Detail & Related papers (2020-04-19T09:43:14Z)
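A rough intuition for why binary node representations save space, sketched below. This is illustrative only: BGN learns binary parameters and representations end-to-end, whereas this sketch merely sign-binarizes fixed real-valued embeddings and packs them into bits.

```python
import numpy as np

rng = np.random.default_rng(1)
Z = rng.standard_normal((1000, 128)).astype(np.float32)  # real embeddings

B = (Z >= 0)                       # sign-binarize: one bit per dimension
packed = np.packbits(B, axis=1)    # 128 bits -> 16 bytes per node

print(Z.nbytes, packed.nbytes)     # 512000 16000: a 32x reduction

# Similarity between binary codes reduces to a cheap Hamming distance.
hamming_01 = (B[0] != B[1]).sum()
```

The 32x factor comes directly from replacing 32-bit floats with single bits; the efficiency claims in the paper additionally cover binary network parameters and the resulting faster arithmetic.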
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.