A Convergence Rate for Manifold Neural Networks
- URL: http://arxiv.org/abs/2212.12606v2
- Date: Thu, 20 Jul 2023 18:58:11 GMT
- Title: A Convergence Rate for Manifold Neural Networks
- Authors: Joyce Chew and Deanna Needell and Michael Perlmutter
- Abstract summary: Recent work by Z. Wang, L. Ruiz, and A. Ribeiro introduced a method for constructing manifold neural networks using the spectral decomposition of the Laplace-Beltrami operator.
We build upon this result by establishing a rate of convergence that depends on the intrinsic dimension of the manifold.
We also discuss how the rate of convergence depends on the depth of the network and the number of filters used in each layer.
- Score: 6.428026202398116
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: High-dimensional data arises in numerous applications, and the rapidly
developing field of geometric deep learning seeks to develop neural network
architectures to analyze such data in non-Euclidean domains, such as graphs and
manifolds. Recent work by Z. Wang, L. Ruiz, and A. Ribeiro has introduced a
method for constructing manifold neural networks using the spectral
decomposition of the Laplace-Beltrami operator. In the same work, the
authors provide a numerical scheme for implementing such neural networks when
the manifold is unknown and one only has access to finitely many sample points.
The authors show that this scheme, which relies upon building a data-driven
graph, converges to the continuum limit as the number of sample points tends to
infinity. Here, we build upon this result by establishing a rate of convergence
that depends on the intrinsic dimension of the manifold but is independent of
the ambient dimension. We also discuss how the rate of convergence depends on
the depth of the network and the number of filters used in each layer.
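To make the discretization concrete, here is a minimal Python sketch of the general recipe (the kernel, bandwidth, and filter below are illustrative choices, not the paper's exact construction): build a graph Laplacian from the sample points, diagonalize it, and apply a spectral filter to a signal in the resulting eigenbasis.

```python
import numpy as np

def gaussian_graph_laplacian(X, epsilon):
    """Unnormalized graph Laplacian from a Gaussian kernel on sample points X of shape (n, d)."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-sq_dists / epsilon)
    np.fill_diagonal(W, 0.0)
    return np.diag(W.sum(axis=1)) - W

def spectral_filter(L, f, h):
    """Apply the spectral filter h to the signal f: sum_i h(lambda_i) <f, phi_i> phi_i."""
    lam, Phi = np.linalg.eigh(L)          # eigendecomposition of the graph Laplacian
    coeffs = Phi.T @ f                    # project f onto the eigenbasis
    return Phi @ (h(lam) * coeffs)        # rescale coefficients and reconstruct

# Example: points sampled from a circle (a 1-d manifold embedded in R^2).
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, size=200)
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)
f = np.sin(theta)                                   # a smooth signal on the manifold
L = gaussian_graph_laplacian(X, epsilon=0.1)
f_filtered = spectral_filter(L, f, h=lambda lam: np.exp(-lam))  # heat-kernel-type filter
```

As the number of samples grows (with the bandwidth scaled suitably), such data-driven graph Laplacians approximate the Laplace-Beltrami operator, which is what drives the convergence behavior analyzed in the paper.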
Related papers
- Exploring the Manifold of Neural Networks Using Diffusion Geometry [7.038126249994092]
We learn a manifold in which the datapoints are neural networks by introducing a distance between the hidden-layer representations of the networks.
These distances are then fed to the non-linear dimensionality reduction algorithm PHATE to create a manifold of neural networks.
Our analysis reveals that high-performing networks cluster together in the manifold, displaying consistent embedding patterns.
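As a rough illustration (assuming the `phate` Python package; the paper's inter-network distance is more involved than the plain feature vectors used here), a collection of hidden-layer representations could be embedded as follows:

```python
import numpy as np
import phate  # https://github.com/KrishnaswamyLab/PHATE

# Hypothetical setup: hidden-layer representations of 50 trained networks,
# each flattened to a vector. Using raw vectors is a simplifying assumption;
# the paper defines a custom distance between representations.
rng = np.random.default_rng(0)
representations = rng.normal(size=(50, 256))

# Embed the "manifold of neural networks" into 2-d with PHATE.
phate_op = phate.PHATE(n_components=2, random_state=0)
embedding = phate_op.fit_transform(representations)  # shape (50, 2)
```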
arXiv Detail & Related papers (2024-11-19T16:34:45Z)
- A Theoretical Study of Neural Network Expressive Power via Manifold Topology [9.054396245059555]
A prevalent assumption regarding real-world data is that it lies on or close to a low-dimensional manifold.
In this study, we investigate network expressive power in terms of the latent data manifold.
We present an upper bound on the size of ReLU neural networks.
arXiv Detail & Related papers (2024-10-21T22:10:24Z)
- Riemannian Residual Neural Networks [58.925132597945634]
We show how to extend the residual neural network (ResNet) to general Riemannian manifolds.
ResNets have become ubiquitous in machine learning due to their beneficial learning properties, excellent empirical results, and easy-to-incorporate nature when building varied neural networks.
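A minimal sketch of that idea on the unit sphere, where the exponential map has a closed form (the residual map below is a hypothetical stand-in for a learned layer, not the paper's architecture): instead of adding f(x) in the ambient space, a Riemannian residual step moves from x along the tangent vector f(x) via the exponential map.

```python
import numpy as np

def sphere_exp(x, v):
    """Exponential map on the unit sphere: follow the geodesic from x along tangent vector v."""
    norm_v = np.linalg.norm(v)
    if norm_v < 1e-12:
        return x
    return np.cos(norm_v) * x + np.sin(norm_v) * v / norm_v

def riemannian_residual_step(x, f):
    """One residual step x -> exp_x(f(x)), with f(x) first projected to the tangent space at x."""
    v = f(x)
    v = v - np.dot(v, x) * x            # remove the component normal to the sphere
    return sphere_exp(x, v)

# Hypothetical "learned" residual map: a fixed linear map, for illustration only.
A = 0.1 * np.array([[0.0, 1.0, 0.0],
                    [-1.0, 0.0, 0.0],
                    [0.0, 0.0, 1.0]])
x = np.array([1.0, 0.0, 0.0])           # a point on the unit sphere
x_next = riemannian_residual_step(x, lambda y: A @ y)
print(np.linalg.norm(x_next))           # ~1.0: the update stays on the manifold
```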
arXiv Detail & Related papers (2023-10-16T02:12:32Z)
- Manifold Filter-Combine Networks [22.19399386945317]
We introduce a class of manifold neural networks (MNNs) that we call Manifold Filter-Combine Networks (MFCNs).
This class includes a wide variety of subclasses that can be thought of as manifold analogs of various popular graph neural networks (GNNs).
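A minimal numpy sketch of the filter-combine pattern (names, shapes, and filter choices are illustrative assumptions, not the paper's exact formulation): each layer applies a bank of spectral filters to every input channel, combines the filtered channels with a learned matrix, and applies a pointwise nonlinearity.

```python
import numpy as np

def filter_combine_layer(Phi, lam, F, Theta, filters):
    """One filter-combine layer (illustrative sketch).

    Phi, lam : eigenvectors/eigenvalues of a (graph) Laplacian, shapes (n, n) and (n,)
    F        : input signals, shape (n, c_in)
    Theta    : combine matrix, shape (c_in * len(filters), c_out)
    filters  : list of functions h(lam) defining spectral filters
    """
    filtered = []
    for j in range(F.shape[1]):                    # filter each channel...
        coeffs = Phi.T @ F[:, j]
        for h in filters:                          # ...with every filter in the bank
            filtered.append(Phi @ (h(lam) * coeffs))
    Z = np.stack(filtered, axis=1)                 # shape (n, c_in * len(filters))
    return np.maximum(Z @ Theta, 0.0)              # combine channels, then ReLU

# Example with a random symmetric PSD matrix standing in for a Laplacian:
rng = np.random.default_rng(0)
B = rng.normal(size=(40, 40)); L = B @ B.T
lam, Phi = np.linalg.eigh(L)
F = rng.normal(size=(40, 3))                       # 3 input channels
filters = [lambda t: np.exp(-t), lambda t: t * np.exp(-t)]
Theta = rng.normal(size=(3 * len(filters), 4))     # combine down to 4 output channels
out = filter_combine_layer(Phi, lam, F, Theta, filters)   # shape (40, 4)
```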
arXiv Detail & Related papers (2023-07-08T23:19:53Z)
- Geometric Graph Filters and Neural Networks: Limit Properties and Discriminability Trade-offs [122.06927400759021]
We study the relationship between a graph neural network (GNN) and a manifold neural network (MNN) when the graph is constructed from a set of points sampled from the manifold.
We prove non-asymptotic error bounds showing that convolutional filters and neural networks on these graphs converge to convolutional filters and neural networks on the continuous manifold.
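An illustrative numerical check of this kind of statement (a standard spectral-convergence experiment, not the paper's bound): as graphs built from more and more points on the unit circle densify, the ratios of the low graph-Laplacian eigenvalues approach the Laplace-Beltrami spectrum of the circle, 0, 1, 1, 4, 4, ...

```python
import numpy as np

# Eigenvalue ratios of a rescaled data-driven graph Laplacian built from
# n points on the unit circle, compared against the continuum spectrum.
for n in [100, 400, 1600]:
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    X = np.stack([np.cos(theta), np.sin(theta)], axis=1)
    epsilon = n ** -0.5                                  # bandwidth shrinking with n
    sq = np.sum((X[:, None] - X[None, :]) ** 2, axis=-1)
    W = np.exp(-sq / epsilon); np.fill_diagonal(W, 0.0)
    L = (np.diag(W.sum(1)) - W) / (n * epsilon ** 1.5)   # d = 1 scaling, up to a kernel constant
    lam = np.linalg.eigvalsh(L)[:5]
    print(n, lam / max(lam[1], 1e-12))                   # ratios approach [0, 1, 1, 4, 4]
```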
arXiv Detail & Related papers (2023-05-29T08:27:17Z)
- Predictions Based on Pixel Data: Insights from PDEs and Finite Differences [0.0]
This paper deals with the approximation of time sequences in which each observation is a matrix.
We show that relatively small networks can exactly represent a class of numerical discretizations of PDEs based on the method of lines.
Our network architecture is inspired by those typically adopted in the approximation of time sequences.
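A concrete instance of the representability claim (a generic observation about convolutions and stencils, not the paper's specific architecture): one forward-Euler step of the 2-d heat equation under a method-of-lines discretization is a single linear convolution, so a convolutional layer with the right fixed kernel reproduces the numerical scheme exactly.

```python
import numpy as np
from scipy.ndimage import convolve

# One forward-Euler step of the heat equation u_t = u_xx + u_yy on a grid:
# u_next = u + dt * (5-point discrete Laplacian of u). The stencil below is
# exactly the kernel a convolutional layer would need to represent this step.
dt, h = 0.1, 1.0
laplacian_stencil = np.array([[0.0,  1.0, 0.0],
                              [1.0, -4.0, 1.0],
                              [0.0,  1.0, 0.0]]) / h ** 2

def heat_step(u):
    return u + dt * convolve(u, laplacian_stencil, mode="wrap")  # periodic boundary

rng = np.random.default_rng(0)
u = rng.normal(size=(32, 32))   # each "observation" is a matrix, as in the paper
u = heat_step(u)                # a layer with this fixed kernel acts identically
```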
arXiv Detail & Related papers (2023-05-01T08:54:45Z)
- Tangent Bundle Convolutional Learning: from Manifolds to Cellular Sheaves and Back [84.61160272624262]
We define tangent bundle filters and tangent bundle neural networks (TNNs) based on a convolution operation defined over the tangent bundle of a manifold.
Tangent bundle filters admit a spectral representation that generalizes the ones of scalar manifold filters, graph filters and standard convolutional filters in continuous time.
We numerically evaluate the effectiveness of the proposed architecture on various learning tasks.
arXiv Detail & Related papers (2023-03-20T17:57:15Z)
- Tangent Bundle Filters and Neural Networks: from Manifolds to Cellular Sheaves and Back [114.01902073621577]
We use a convolution operation over the tangent bundle to define tangent bundle filters and tangent bundle neural networks (TNNs).
We discretize TNNs in both space and time, showing that their discrete counterpart is a principled variant of the recently introduced Sheaf Neural Networks.
We numerically evaluate the effectiveness of the proposed architecture on a denoising task of a tangent vector field over the unit 2-sphere.
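A small sketch of the data setup only (the tangent projection step, not the TNN itself; all names are illustrative): a tangent vector field on the unit 2-sphere assigns to each point x a vector orthogonal to x, and a noisy ambient field can be returned to the tangent bundle by projecting out the normal component at each point.

```python
import numpy as np

def tangent_project(X, V):
    """Project ambient vectors V onto the tangent planes of the unit sphere at points X."""
    normal_components = np.sum(V * X, axis=1, keepdims=True)  # <v, x> at each point
    return V - normal_components * X

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)      # 500 points on the unit 2-sphere

# A smooth tangent field (rotation about the z-axis) plus ambient noise:
V = np.stack([-X[:, 1], X[:, 0], np.zeros(500)], axis=1)
V_noisy = V + 0.1 * rng.normal(size=V.shape)

V_clean = tangent_project(X, V_noisy)              # back in the tangent bundle
print(np.max(np.abs(np.sum(V_clean * X, axis=1)))) # ~0: tangency restored
```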
arXiv Detail & Related papers (2022-10-26T21:55:45Z)
- Convolutional Neural Networks on Manifolds: From Graphs and Back [122.06927400759021]
We propose a manifold neural network (MNN) composed of a bank of manifold convolutional filters and point-wise nonlinearities.
In summary, we treat the manifold model as the limit of large graphs, construct MNNs on it, and recover graph neural networks by discretizing the MNNs.
arXiv Detail & Related papers (2022-10-01T21:17:39Z)
- A Point-Cloud Deep Learning Framework for Prediction of Fluid Flow Fields on Irregular Geometries [62.28265459308354]
The network learns an end-to-end mapping between spatial positions and CFD quantities.
Incompressible, laminar, steady flow past a cylinder with various cross-sectional shapes is considered.
The network predicts flow fields hundreds of times faster than a conventional CFD solver.
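A much-simplified stand-in for the paper's point-cloud network (illustrative only; the actual framework operates on whole point clouds of the geometry): a small MLP mapping a spatial position (x, y) to flow quantities (u, v, p), evaluated over many points in one batched forward pass.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 64)), np.zeros(64)
W2, b2 = rng.normal(size=(64, 64)), np.zeros(64)
W3, b3 = rng.normal(size=(64, 3)), np.zeros(3)

def predict_flow(positions):
    """positions: (n, 2) query points -> (n, 3) predicted (u, v, p) values."""
    h = np.tanh(positions @ W1 + b1)
    h = np.tanh(h @ W2 + b2)
    return h @ W3 + b3

# Once trained (the weights above are random placeholders), predicting a full
# field over thousands of points is a single forward pass -- the source of the
# large speedup over running a CFD solver.
fields = predict_flow(rng.uniform(-1, 1, size=(5000, 2)))  # shape (5000, 3)
```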
arXiv Detail & Related papers (2020-10-15T12:15:02Z)