Manifold learning via quantum dynamics
- URL: http://arxiv.org/abs/2112.11161v1
- Date: Mon, 20 Dec 2021 18:54:03 GMT
- Title: Manifold learning via quantum dynamics
- Authors: Akshat Kumar, Mohan Sarovar
- Abstract summary: We introduce an algorithm for computing geodesics on sampled manifolds based on simulation of quantum dynamics on a graph embedding of the sampled data.
Our approach exploits classic results in semiclassical analysis and the quantum-classical correspondence, and forms a basis for techniques to learn the manifold from which a dataset is sampled.
- Score: 22.869267883760287
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce an algorithm for computing geodesics on sampled manifolds that
relies on simulation of quantum dynamics on a graph embedding of the sampled
data. Our approach exploits classic results in semiclassical analysis and the
quantum-classical correspondence, and forms a basis for techniques to learn the
manifold from which a dataset is sampled, and subsequently for nonlinear
dimensionality reduction of high-dimensional datasets. We illustrate the new
algorithm with data sampled from model manifolds and also by a clustering
demonstration based on COVID-19 mobility data. Finally, our method reveals
interesting connections between the discretization provided by data sampling
and quantization.
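The abstract describes propagating quantum dynamics on a graph built from sampled data and reading geodesics off the evolving state. A minimal sketch of that idea, not the paper's actual algorithm: build a Gaussian-kernel graph Laplacian from points on a circle, evolve a localized wavepacket under the Schrödinger propagator, and track the peak of the probability density. All parameter values (`eps`, `hbar`, the time step) are illustrative assumptions.

```python
# Hedged sketch: propagate a wavepacket on a graph Laplacian built from
# points sampled on a circle; the peak of |psi|^2 over time traces an
# approximate path along the manifold. Parameters are illustrative.
import numpy as np
from scipy.linalg import expm

# Sample a 1-D manifold (a circle) embedded in R^2.
n = 200
theta = np.sort(np.random.default_rng(0).uniform(0, 2 * np.pi, n))
X = np.column_stack([np.cos(theta), np.sin(theta)])

# Gaussian-kernel graph Laplacian (unnormalized) as a discrete Hamiltonian.
eps = 0.05
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-D2 / eps)
L = np.diag(W.sum(1)) - W

# Initial state: localized at vertex 0, with a phase giving it "momentum".
hbar = 0.1
psi = np.exp(-D2[0] / (4 * eps)) * np.exp(1j * theta / hbar)
psi /= np.linalg.norm(psi)

# Schrodinger propagation: psi(t + dt) = exp(-i dt L / hbar) psi(t).
U = expm(-1j * 0.01 * L / hbar)
path = []
for _ in range(50):
    psi = U @ psi
    path.append(int(np.argmax(np.abs(psi) ** 2)))  # peak vertex index

print(path[:10])
```

The semiclassical correspondence the paper exploits suggests that, in the right scaling regime, such a wavepacket's center follows a geodesic of the underlying manifold.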
Related papers
- Diffusion posterior sampling for simulation-based inference in tall data settings [53.17563688225137]
Simulation-based inference (SBI) approximates the posterior distribution that relates input parameters to a given observation.
In this work, we consider a tall data extension in which multiple observations are available to better infer the parameters of the model.
We compare our method to recently proposed competing approaches on various numerical experiments and demonstrate its superiority in terms of numerical stability and computational cost.
arXiv Detail & Related papers (2024-04-11T09:23:36Z)
- Joint Distributional Learning via Cramer-Wold Distance [0.7614628596146602]
We introduce the Cramer-Wold distance regularization, which can be computed in a closed-form, to facilitate joint distributional learning for high-dimensional datasets.
We also introduce a two-step learning method to enable flexible prior modeling and improve the alignment between the aggregated posterior and the prior distribution.
arXiv Detail & Related papers (2023-10-25T05:24:23Z)
- A Geometric Perspective on Diffusion Models [60.69328526215776]
We inspect the ODE-based sampling of a popular variance-exploding SDE and reveal several intriguing structures of its sampling dynamics.
We establish a theoretical relationship between the optimal ODE-based sampling and the classic mean-shift (mode-seeking) algorithm.
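For readers unfamiliar with the classic algorithm mentioned above, here is a minimal sketch of mean-shift mode seeking, which iterates a kernel-weighted mean until it settles at a density mode. The bandwidth and step count are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of classic mean-shift (mode-seeking): repeatedly move a
# query point to the Gaussian-kernel-weighted mean of the data.
import numpy as np

def mean_shift(points, x, bandwidth=0.5, steps=100):
    for _ in range(steps):
        w = np.exp(-((points - x) ** 2).sum(1) / (2 * bandwidth ** 2))
        x = (w[:, None] * points).sum(0) / w.sum()  # weighted mean
    return x

rng = np.random.default_rng(0)
pts = rng.normal([0.0, 0.0], 0.3, size=(200, 2))  # single mode near the origin
mode = mean_shift(pts, np.array([1.0, 1.0]))
print(np.round(mode, 1))
```

The paper's claim is that optimal ODE-based diffusion sampling performs an update of essentially this mode-seeking form.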
arXiv Detail & Related papers (2023-05-31T15:33:16Z)
- Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
- Shining light on data: Geometric data analysis through quantum dynamics [14.315501760755604]
We show how fine-scale features of a structure in data can be extracted from approximations to quantum mechanical processes given by data-driven graph Laplacians and localized wavepackets.
This data-driven quantization procedure leads to a novel, yet natural uncertainty principle for data analysis induced by limited data.
We illustrate the new approach with algorithms and several applications to real-world data, including the learning of patterns and anomalies in social distancing and mobility behavior during the COVID-19 pandemic.
arXiv Detail & Related papers (2022-12-01T17:38:01Z)
- Towards a mathematical understanding of learning from few examples with nonlinear feature maps [68.8204255655161]
We consider the problem of data classification where the training set consists of just a few data points.
We reveal key relationships between the geometry of an AI model's feature space, the structure of the underlying data distributions, and the model's generalisation capabilities.
arXiv Detail & Related papers (2022-11-07T14:52:58Z)
- ClusterQ: Semantic Feature Distribution Alignment for Data-Free Quantization [111.12063632743013]
We propose a new and effective data-free quantization method termed ClusterQ.
To obtain high inter-class separability of semantic features, we cluster and align the feature distribution statistics.
We also incorporate the intra-class variance to solve class-wise mode collapse.
arXiv Detail & Related papers (2022-04-30T06:58:56Z)
- Learning from few examples with nonlinear feature maps [68.8204255655161]
We explore the phenomenon and reveal key relationships between the dimensionality of an AI model's feature space, the non-degeneracy of data distributions, and the model's generalisation capabilities.
Our analysis focuses on how nonlinear feature transformations, which map the original data into higher- and possibly infinite-dimensional spaces, influence the resulting model's generalisation capabilities.
arXiv Detail & Related papers (2022-03-31T10:36:50Z)
- Learning Low-Dimensional Nonlinear Structures from High-Dimensional Noisy Data: An Integral Operator Approach [5.975670441166475]
We propose a kernel-spectral embedding algorithm for learning low-dimensional nonlinear structures from high-dimensional and noisy observations.
The algorithm employs an adaptive bandwidth selection procedure which does not rely on prior knowledge of the underlying manifold.
The obtained low-dimensional embeddings can be further utilized for downstream purposes such as data visualization, clustering and prediction.
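A generic kernel-spectral embedding in this spirit can be sketched as follows. The per-point bandwidth rule used here (squared distance to the k-th nearest neighbor) is an illustrative stand-in for the paper's adaptive selection procedure, and all parameter values are assumptions.

```python
# Hedged sketch of a kernel-spectral embedding with a data-driven,
# per-point bandwidth; not the paper's actual algorithm.
import numpy as np

def spectral_embed(X, dim=2, k=10):
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    # Per-point bandwidth: squared distance to the k-th nearest neighbor.
    sigma2 = np.sort(D2, axis=1)[:, k]
    W = np.exp(-D2 / np.sqrt(np.outer(sigma2, sigma2)))
    d = W.sum(1)
    A = W / np.sqrt(np.outer(d, d))  # symmetrically normalized affinity
    vals, vecs = np.linalg.eigh(A)   # eigenvalues in ascending order
    # Skip the trivial top eigenvector; use the next `dim` as coordinates.
    return vecs[:, -(dim + 1):-1][:, ::-1]

rng = np.random.default_rng(1)
t = rng.uniform(0, 4 * np.pi, 300)
noisy_spiral = (np.column_stack([t * np.cos(t), t * np.sin(t)])
                + 0.1 * rng.normal(size=(300, 2)))
Y = spectral_embed(noisy_spiral)
print(Y.shape)
```

The resulting coordinates `Y` could then feed the downstream tasks the abstract mentions, such as visualization or clustering.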
arXiv Detail & Related papers (2022-02-28T22:46:34Z)
- Manifold embedding data-driven mechanics [0.0]
This article introduces a new data-driven approach that leverages a manifold embedding generated by an invertible neural network.
We achieve this by training a deep neural network to globally map data from the manifold onto a lower-dimensional Euclidean vector space.
arXiv Detail & Related papers (2021-12-18T04:38:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.