Geometric Scattering on Measure Spaces
- URL: http://arxiv.org/abs/2208.08561v1
- Date: Wed, 17 Aug 2022 22:40:09 GMT
- Title: Geometric Scattering on Measure Spaces
- Authors: Joyce Chew, Matthew Hirn, Smita Krishnaswamy, Deanna Needell, Michael Perlmutter, Holly Steach, Siddharth Viswanath, and Hau-Tieng Wu
- Abstract summary: We introduce a general, unified model for geometric scattering on measure spaces.
We consider finite measure spaces that are obtained from randomly sampling an unknown manifold.
We propose two methods for constructing a data-driven graph on which the associated graph scattering transform approximates the scattering transform on the underlying manifold.
- Score: 12.0756034112778
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The scattering transform is a multilayered, wavelet-based transform initially
introduced as a model of convolutional neural networks (CNNs) that has played a
foundational role in our understanding of these networks' stability and
invariance properties. Subsequently, there has been widespread interest in
extending the success of CNNs to data sets with non-Euclidean structure, such
as graphs and manifolds, leading to the emerging field of geometric deep
learning. In order to improve our understanding of the architectures used in
this new field, several papers have proposed generalizations of the scattering
transform for non-Euclidean data structures such as undirected graphs and
compact Riemannian manifolds without boundary.
In this paper, we introduce a general, unified model for geometric scattering
on measure spaces. Our proposed framework includes previous work on geometric
scattering as special cases but also applies to more general settings such as
directed graphs, signed graphs, and manifolds with boundary. We propose a new
criterion that identifies the groups to which a useful representation should be
invariant, and we show that this criterion is sufficient to guarantee that the
scattering transform has desirable stability and invariance properties (a
schematic of the scattering cascade is sketched after the abstract).
Additionally, we consider finite measure spaces that are obtained from randomly
sampling an unknown manifold. We propose two methods for constructing a
data-driven graph on which the associated graph scattering transform
approximates the scattering transform on the underlying manifold (a minimal
sketch of this construction follows the abstract). Moreover, we
use a diffusion-maps based approach to prove quantitative estimates on the rate
of convergence of one of these approximations as the number of sample points
tends to infinity. Lastly, we showcase the utility of our method on spherical
images, directed graphs, and high-dimensional single-cell data.
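For readers new to scattering networks, the following schematic shows the cascade structure the abstract refers to: wavelet operators $W_j$, a pointwise modulus nonlinearity, and a low-pass averaging operator $A$. This is the standard form from the scattering literature, not necessarily the paper's exact measure-space formulation.

```latex
% Schematic scattering cascade: wavelet operators W_j, pointwise modulus,
% and low-pass averaging A. Standard form from the scattering literature;
% the paper's measure-space definition may differ in details such as
% normalization and the choice of operators.
\[
  S f = A f, \qquad
  S_{j_1} f = A\,\bigl|W_{j_1} f\bigr|, \qquad
  S_{j_1, j_2} f = A\,\bigl|W_{j_2}\bigl|W_{j_1} f\bigr|\bigr|,
\]
\[
  S_{j_1, \dots, j_m} f
    = A\,\bigl|W_{j_m} \cdots \bigl|W_{j_1} f\bigr| \cdots \bigr|.
\]
```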
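As a concrete illustration of the sampled-manifold setting, here is a minimal sketch of a data-driven graph scattering transform: a Gaussian-kernel affinity graph is built on the sample points (in the spirit of diffusion maps), a lazy random-walk operator defines dyadic diffusion wavelets, and scattering coefficients are aggregated by averaging. The bandwidth `epsilon`, the number of scales `J`, and the mean aggregation are illustrative choices, not the authors' exact construction.

```python
import numpy as np

def diffusion_operator(points, epsilon=0.1):
    """Gaussian-kernel affinity graph on the sample points and a
    lazy random-walk diffusion operator built from it (illustrative)."""
    sq_dists = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    W = np.exp(-sq_dists / epsilon)               # affinity matrix
    P = W / W.sum(axis=1, keepdims=True)          # row-stochastic random walk
    return 0.5 * (np.eye(len(points)) + P)        # lazy walk: (I + P) / 2

def scattering_coefficients(points, f, J=4, epsilon=0.1):
    """Zeroth-, first-, and second-order graph scattering coefficients
    of a signal f sampled at the given points."""
    P = diffusion_operator(points, epsilon)
    # Dyadic powers P^(2^j); diffusion wavelets Psi_j = P^(2^(j-1)) - P^(2^j).
    powers = [np.linalg.matrix_power(P, 2 ** j) for j in range(J + 1)]
    wavelets = [powers[j - 1] - powers[j] for j in range(1, J + 1)]
    coeffs = [f.mean()]                           # zeroth order: global average
    for j1, Psi1 in enumerate(wavelets):
        u1 = np.abs(Psi1 @ f)                     # first order: |Psi_{j1} f|
        coeffs.append(u1.mean())
        for Psi2 in wavelets[j1 + 1:]:            # second order (j2 > j1)
            coeffs.append(np.abs(Psi2 @ u1).mean())
    return np.array(coeffs)

# Example: points sampled from the unit circle with an oscillatory signal.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 200)
points = np.stack([np.cos(theta), np.sin(theta)], axis=1)
f = np.sin(3.0 * theta)
print(scattering_coefficients(points, f))
```

Roughly, diffusion-maps theory says that kernel-based operators of this type converge to the heat semigroup of the underlying manifold as the sample size grows, which is the mechanism behind the quantitative convergence rates mentioned in the abstract.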
Related papers
- Scaling Riemannian Diffusion Models [68.52820280448991]
We show that our method enables us to scale to high-dimensional tasks on nontrivial manifolds.
We model QCD densities on $SU(n)$ lattices and contrastively learned embeddings on high dimensional hyperspheres.
arXiv Detail & Related papers (2023-10-30T21:27:53Z)
- Geometric Neural Diffusion Processes [55.891428654434634]
We extend the framework of diffusion models to incorporate a series of geometric priors in infinite-dimensional modelling.
We show that with these conditions, the generative functional model admits the same symmetry.
arXiv Detail & Related papers (2023-07-11T16:51:38Z)
- Joint Bayesian Inference of Graphical Structure and Parameters with a Single Generative Flow Network [59.79008107609297]
We propose to approximate the joint posterior over the structure and parameters of a Bayesian network.
We use a single GFlowNet whose sampling policy follows a two-phase process.
Since the parameters are included in the posterior distribution, this leaves more flexibility for the local probability models.
arXiv Detail & Related papers (2023-05-30T19:16:44Z)
- VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator means that the latent space provides an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
- The Manifold Scattering Transform for High-Dimensional Point Cloud Data [16.500568323161563]
We present practical schemes for applying the manifold scattering transform to datasets arising in naturalistic systems.
We show that our methods are effective for signal classification and manifold classification tasks.
arXiv Detail & Related papers (2022-06-21T02:15:00Z)
- Geometric variational inference [0.0]
Variational Inference (VI) or Markov-Chain Monte-Carlo (MCMC) techniques are used to go beyond point estimates.
This work proposes geometric Variational Inference (geoVI), a method based on Riemannian geometry and the Fisher information metric.
The distribution, expressed in the coordinate system induced by the transformation, takes a particularly simple form that allows for an accurate variational approximation.
arXiv Detail & Related papers (2021-05-21T17:18:50Z)
- The multilayer random dot product graph [6.722870980553432]
We present a comprehensive extension of the latent position network model known as the random dot product graph.
We propose a method for jointly embedding submatrices into a suitable latent space.
Empirical improvements in link prediction over single graph embeddings are exhibited in a cyber-security example.
arXiv Detail & Related papers (2020-07-20T20:31:39Z)
- Gauge Equivariant Mesh CNNs: Anisotropic convolutions on geometric graphs [81.12344211998635]
A common approach to defining convolutions on meshes is to interpret them as graphs and apply graph convolutional networks (GCNs).
We propose Gauge Equivariant Mesh CNNs which generalize GCNs to apply anisotropic gauge equivariant kernels.
Our experiments validate the significantly improved expressivity of the proposed model over conventional GCNs and other methods.
arXiv Detail & Related papers (2020-03-11T17:21:15Z)
- Understanding Graph Neural Networks with Generalized Geometric Scattering Transforms [67.88675386638043]
The scattering transform is a multilayered wavelet-based deep learning architecture that acts as a model of convolutional neural networks.
We introduce windowed and non-windowed geometric scattering transforms for graphs based upon a very general class of asymmetric wavelets.
We show that these asymmetric graph scattering transforms have many of the same theoretical guarantees as their symmetric counterparts.
arXiv Detail & Related papers (2019-11-14T17:23:06Z)
- Geometric Wavelet Scattering Networks on Compact Riemannian Manifolds [9.341436585977913]
Similar to the Euclidean scattering transform, the geometric scattering transform is based on a cascade of wavelet filters and pointwise nonlinearities.
Empirical results demonstrate its utility on several geometric learning tasks.
arXiv Detail & Related papers (2019-05-24T21:19:04Z)