The Manifold Scattering Transform for High-Dimensional Point Cloud Data
- URL: http://arxiv.org/abs/2206.10078v2
- Date: Sun, 21 Jan 2024 20:03:15 GMT
- Title: The Manifold Scattering Transform for High-Dimensional Point Cloud Data
- Authors: Joyce Chew, Holly R. Steach, Siddharth Viswanath, Hau-Tieng Wu,
Matthew Hirn, Deanna Needell, Smita Krishnaswamy, Michael Perlmutter
- Abstract summary: We present practical schemes for applying the manifold scattering transform to datasets arising in naturalistic systems.
We show that our methods are effective for signal classification and manifold classification tasks.
- Score: 16.500568323161563
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The manifold scattering transform is a deep feature extractor for data
defined on a Riemannian manifold. It is one of the first examples of extending
convolutional neural network-like operators to general manifolds. The initial
work on this model focused primarily on its theoretical stability and
invariance properties but did not provide methods for its numerical
implementation except in the case of two-dimensional surfaces with predefined
meshes. In this work, we present practical schemes, based on the theory of
diffusion maps, for applying the manifold scattering transform to datasets
arising in naturalistic systems, such as single cell genetics, where the data
is a high-dimensional point cloud modeled as lying on a low-dimensional
manifold. We show that our methods are effective for signal classification and
manifold classification tasks.
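To make the construction above concrete, the sketch below outlines one way a diffusion-maps-style manifold scattering transform can be computed on a point cloud: build a row-stochastic diffusion operator from a Gaussian kernel, form dyadic diffusion wavelets, and aggregate statistical moments of the filtered signals. This is a minimal illustration under assumed choices (a fixed kernel bandwidth `eps`, wavelets W_j = P^(2^(j-1)) - P^(2^j), and moment-based aggregation); the paper's exact normalization and filter construction may differ, and the function names are ours.

```python
import numpy as np
from scipy.spatial.distance import cdist

def diffusion_operator(X, eps):
    """Row-stochastic diffusion operator from a Gaussian kernel (diffusion maps, alpha = 1)."""
    K = np.exp(-cdist(X, X, "sqeuclidean") / eps)
    d = K.sum(axis=1)
    K = K / np.outer(d, d)                    # density normalization
    return K / K.sum(axis=1, keepdims=True)   # Markov (row) normalization

def scattering_moments(P, f, J=4, Q=4):
    """Zeroth-, first-, and second-order scattering moments of a signal f on the
    point cloud, using dyadic diffusion wavelets W_j = P^(2^(j-1)) - P^(2^j)."""
    powers = [np.linalg.matrix_power(P, 2 ** j) for j in range(J + 1)]
    wavelets = [powers[j - 1] - powers[j] for j in range(1, J + 1)]

    def moments(u):
        # Statistical moments make the features invariant to permutations of the points.
        return [np.mean(np.abs(u) ** q) for q in range(1, Q + 1)]

    feats = list(moments(f))                  # zeroth order
    first = [np.abs(W @ f) for W in wavelets]
    for u in first:
        feats += moments(u)                   # first order
    for j, u in enumerate(first):
        for Wp in wavelets[j + 1:]:           # second order, j' > j
            feats += moments(np.abs(Wp @ u))
    return np.array(feats)
```

The resulting feature vectors, computed for one or several signals on the point cloud (or for the coordinate functions themselves), can then be passed to a standard classifier for the signal and manifold classification tasks mentioned in the abstract.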
Related papers
- Scaling Riemannian Diffusion Models [68.52820280448991]
We show that our method enables us to scale to high dimensional tasks on nontrivial manifolds.
We model QCD densities on $SU(n)$ lattices and contrastively learned embeddings on high dimensional hyperspheres.
arXiv Detail & Related papers (2023-10-30T21:27:53Z)
- Implicit Manifold Gaussian Process Regression [49.0787777751317]
Gaussian process regression is widely used to provide well-calibrated uncertainty estimates.
It struggles with high-dimensional data, even though such data often lies on an implicit low-dimensional manifold.
In this paper we propose a technique capable of inferring implicit structure directly from data (labeled and unlabeled) in a fully differentiable way.
arXiv Detail & Related papers (2023-10-30T09:52:48Z)
- Canonical normalizing flows for manifold learning [14.377143992248222]
We propose a canonical manifold learning flow method, where a novel objective enforces the transformation matrix to have few prominent and non-degenerate basis functions.
Canonical manifold flow yields a more efficient use of the latent space, automatically generating fewer prominent and distinct dimensions to represent data.
arXiv Detail & Related papers (2023-10-19T13:48:05Z)
- A Geometric Insight into Equivariant Message Passing Neural Networks on Riemannian Manifolds [1.0878040851638]
We argue that the metric attached to a coordinate-independent feature field should optimally preserve the principal bundle's original metric.
We obtain a message passing scheme on the manifold by discretizing the diffusion equation flow for a fixed time step.
The discretization of the higher-order diffusion process on a graph yields a new general class of equivariant GNNs.
arXiv Detail & Related papers (2023-10-16T14:31:13Z)
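The entry above obtains message passing by discretizing a diffusion equation for a fixed time step. As a generic illustration only (it ignores the coordinate-independent, bundle-aware structure the paper works with), one explicit Euler step of the graph heat equation looks as follows; `heat_equation_step`, the adjacency matrix `A`, and the step size `tau` are illustrative choices, not the paper's API.

```python
import numpy as np

def heat_equation_step(A, x, tau=0.1):
    """One explicit Euler step of the graph heat equation dx/dt = -L x,
    i.e. a single diffusion-based message passing update of node features x."""
    L = np.diag(A.sum(axis=1)) - A    # combinatorial Laplacian of weighted adjacency A
    return x - tau * (L @ x)          # x_{t+1} = x_t - tau * L x_t
```

For stability, tau should be small relative to the largest eigenvalue of L; repeating the step approximates the diffusion flow over longer times.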
- Generative Modeling on Manifolds Through Mixture of Riemannian Diffusion Processes [57.396578974401734]
We introduce a principled framework for building a generative diffusion process on general manifolds.
Instead of following the denoising approach of previous diffusion models, we construct a diffusion process using a mixture of bridge processes.
We develop a geometric understanding of the mixture process, deriving the drift as a weighted mean of tangent directions to the data points.
arXiv Detail & Related papers (2023-10-11T06:04:40Z)
- On-Manifold Projected Gradient Descent [0.0]
This work provides a computable, direct, and mathematically rigorous approximation to the differential geometry of class manifolds for high-dimensional data.
These tools are applied to the setting of neural network image classifiers, where we generate novel, on-manifold data samples.
arXiv Detail & Related papers (2023-08-23T17:50:50Z)
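One common recipe for the kind of on-manifold updates described in the entry above is to estimate a tangent space by local PCA over nearest neighbors and project the gradient onto it before stepping. The sketch below shows that generic recipe, not the authors' specific construction; `k`, `dim`, `lr`, and the function name are assumptions.

```python
import numpy as np

def projected_gradient_step(x, grad, data, k=20, dim=2, lr=0.1):
    """Gradient step restricted to an estimated tangent space of the data manifold.
    The tangent space at x is estimated by PCA over the k nearest data points."""
    # k nearest neighbours of x in the point cloud
    nbrs = data[np.argsort(np.linalg.norm(data - x, axis=1))[:k]]
    centered = nbrs - nbrs.mean(axis=0)
    # Top right singular vectors span the estimated tangent space
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    T = Vt[:dim]                       # (dim, ambient_dim) orthonormal rows
    grad_tangent = T.T @ (T @ grad)    # project the gradient onto the tangent space
    return x - lr * grad_tangent
```

A projection back onto the point cloud or a learned chart can be added after the step if strict manifold membership is required.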
- VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space shows an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
- Convolutional Filtering on Sampled Manifolds [122.06927400759021]
We show that convolutional filtering on a sampled manifold converges to continuous manifold filtering.
Our findings are further demonstrated empirically on a problem of navigation control.
arXiv Detail & Related papers (2022-11-20T19:09:50Z)
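A minimal discrete analogue of the manifold filtering discussed in the entry above is a polynomial filter h(L)x = sum_k h_k L^k x applied on a graph Laplacian built from the sampled points. The sketch below uses a Gaussian-kernel combinatorial Laplacian and omits the rescaling needed for convergence to the Laplace-Beltrami operator; it is an illustrative stand-in, not the construction analyzed in the paper.

```python
import numpy as np
from scipy.spatial.distance import cdist

def polynomial_manifold_filter(X, x, coeffs, eps=0.5):
    """Apply the graph filter h(L) x = sum_k coeffs[k] * L^k x, where L is a
    Gaussian-kernel graph Laplacian built from the sampled points X."""
    W = np.exp(-cdist(X, X, "sqeuclidean") / eps)
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W      # combinatorial Laplacian of the sample graph
    out = np.zeros_like(x, dtype=float)
    Lk_x = x.astype(float)
    for h_k in coeffs:                  # accumulate sum_k h_k L^k x term by term
        out += h_k * Lk_x
        Lk_x = L @ Lk_x
    return out
```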
- Geometric Scattering on Measure Spaces [12.0756034112778]
We introduce a general, unified model for geometric scattering on measure spaces.
We consider finite measure spaces that are obtained from randomly sampling an unknown manifold.
We propose two methods for constructing a data-driven graph on which the associated graph scattering transform approximates the scattering transform on the underlying manifold.
arXiv Detail & Related papers (2022-08-17T22:40:09Z)
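The entry above constructs data-driven graphs whose scattering transforms approximate the manifold scattering transform. As one generic option (not necessarily either of the two constructions the paper proposes), a symmetrized k-nearest-neighbor graph with a lazy random-walk diffusion operator can be built as follows and substituted for the dense operator in the scattering sketch near the top of this page.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def lazy_walk_operator(X, k=10):
    """Lazy random-walk diffusion operator P = (I + D^{-1} A) / 2 on a
    symmetrized k-nearest-neighbor graph built from the sampled points X."""
    A = kneighbors_graph(X, n_neighbors=k, mode="connectivity").toarray()
    A = np.maximum(A, A.T)                    # symmetrize the kNN graph
    P = A / A.sum(axis=1, keepdims=True)      # random-walk normalization D^{-1} A
    return 0.5 * (np.eye(len(X)) + P)         # laziness keeps the spectrum nonnegative
```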
- Inferring Manifolds From Noisy Data Using Gaussian Processes [17.166283428199634]
Most existing manifold learning algorithms replace the original data with lower dimensional coordinates.
This article proposes a new methodology for addressing these problems, allowing interpolation of the estimated manifold between fitted data points.
arXiv Detail & Related papers (2021-10-14T15:50:38Z)
- Manifold Learning via Manifold Deflation [105.7418091051558]
Dimensionality reduction methods provide a valuable means to visualize and interpret high-dimensional data.
However, many popular methods can fail dramatically, even on simple two-dimensional manifolds.
This paper presents an embedding method for a novel, incremental tangent space estimator that incorporates global structure as coordinates.
Empirically, we show our algorithm recovers novel and interesting embeddings on real-world and synthetic datasets.
arXiv Detail & Related papers (2020-07-07T10:04:28Z)