Shining light on data: Geometric data analysis through quantum dynamics
- URL: http://arxiv.org/abs/2212.00682v1
- Date: Thu, 1 Dec 2022 17:38:01 GMT
- Title: Shining light on data: Geometric data analysis through quantum dynamics
- Authors: Akshat Kumar, Mohan Sarovar
- Abstract summary: We show how fine-scale features of a structure in data can be extracted from approximations to quantum mechanical processes given by data-driven graph Laplacians and localized wavepackets.
This data-driven quantization procedure leads to a novel, yet natural uncertainty principle for data analysis induced by limited data.
We illustrate the new approach with algorithms and several applications to real-world data, including the learning of patterns and anomalies in social distancing and mobility behavior during the COVID-19 pandemic.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Experimental sciences have come to depend heavily on our ability to organize,
interpret and analyze high-dimensional datasets produced from observations of a
large number of variables governed by natural processes. Natural laws,
conservation principles, and dynamical structure introduce intricate
inter-dependencies among these observed variables, which in turn yield
geometric structure, with fewer degrees of freedom, on the dataset. We show how
fine-scale features of this structure in data can be extracted from
discrete approximations to quantum mechanical processes given by
data-driven graph Laplacians and localized wavepackets. This data-driven
quantization procedure leads to a novel, yet natural uncertainty principle for
data analysis induced by limited data. We illustrate the new approach with
algorithms and several applications to real-world data, including the learning
of patterns and anomalies in social distancing and mobility behavior during the
COVID-19 pandemic.
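As a rough illustration of the pipeline the abstract describes, the sketch below builds a Gaussian-kernel graph Laplacian from sampled points (diffusion-maps style) and propagates a localized wavepacket under the induced Schrödinger-like dynamics. All function names and parameter choices here are illustrative assumptions, not the paper's construction, which involves careful scaling of the kernel bandwidth with the amount of data.

```python
import numpy as np

def graph_laplacian(points, eps):
    """Gaussian-kernel graph Laplacian (diffusion-maps style) from sampled points."""
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / eps)
    D = W.sum(axis=1)
    return np.diag(D) - W  # unnormalized graph Laplacian

def propagate_wavepacket(L, psi0, t):
    """Evolve psi0 under exp(-i t L) via eigendecomposition of the symmetric L."""
    vals, vecs = np.linalg.eigh(L)
    coeffs = vecs.T @ psi0
    return vecs @ (np.exp(-1j * t * vals) * coeffs)

# Toy dataset: points sampled from a circle (a 1-D manifold embedded in R^2).
rng = np.random.default_rng(0)
theta = np.sort(rng.uniform(0, 2 * np.pi, 200))
X = np.c_[np.cos(theta), np.sin(theta)]
L = graph_laplacian(X, eps=0.05)

psi0 = np.exp(-((theta - np.pi) ** 2) / 0.02).astype(complex)  # localized near theta = pi
psi0 /= np.linalg.norm(psi0)
psi_t = propagate_wavepacket(L, psi0, t=0.1)
# exp(-i t L) is unitary, so |psi_t|^2 stays a normalized distribution over the samples
print(np.round(np.sum(np.abs(psi_t) ** 2), 6))
```

Because the Laplacian is symmetric, the evolution is unitary and the squared amplitudes remain a probability distribution over the data points, which is what makes the wavepacket a usable probe of the sampled geometry.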
Related papers
- Diagnosing quantum transport from wave function snapshots
We study nonequilibrium quantum dynamics of spin chains by employing principal component analysis (PCA) on data sets of wave function snapshots.
Our investigations of several interacting spin chains featuring distinct spin or energy transport reveal that the growth of information spreading in the data follows the same dynamical exponents as the underlying quantum transport of spin or energy.
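A minimal sketch of this kind of analysis: run PCA on a matrix of snapshots and inspect how the variance concentrates in the leading components. The snapshots below are synthetic spreading Gaussian profiles standing in for wave-function data; no actual spin-chain simulation is involved.

```python
import numpy as np

# Synthetic "snapshots": probability profiles of a spreading Gaussian on a 1-D
# lattice, a stand-in for wave-function snapshot data from a spin chain.
n_sites, n_snapshots = 64, 100
x = np.arange(n_sites)
snapshots = np.array([
    np.exp(-((x - n_sites / 2) ** 2) / (2.0 * (1.0 + 0.5 * t) ** 2))
    for t in range(n_snapshots)
])

# PCA via SVD of the mean-centered data matrix (snapshots x sites).
centered = snapshots - snapshots.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)
print(np.round(explained[:3], 3))  # a few components carry most of the variance
```

Because the snapshots form a smooth one-parameter family, the variance spectrum decays quickly; tracking how fast such spectra grow over time is the kind of quantity the paper relates to transport exponents.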
arXiv Detail & Related papers (2024-07-12T08:40:32Z)
- Dynamical Regimes of Diffusion Models
We study generative diffusion models in the regime where the dimension of space and the number of data are large.
Our analysis reveals three distinct dynamical regimes during the backward generative diffusion process.
The dependence of the collapse time on the dimension and number of data provides a thorough characterization of the curse of dimensionality for diffusion models.
arXiv Detail & Related papers (2024-02-28T17:19:26Z)
- Joint Distributional Learning via Cramer-Wold Distance
We introduce the Cramer-Wold distance regularization, which can be computed in closed form, to facilitate joint distributional learning for high-dimensional datasets.
We also introduce a two-step learning method to enable flexible prior modeling and improve the alignment between the aggregated posterior and the prior distribution.
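The Cramér-Wold theorem says two distributions coincide if and only if all their one-dimensional projections coincide. The paper derives a closed-form distance; the sketch below is only a Monte-Carlo stand-in for the idea, comparing sorted random projections of two samples (a sliced-Wasserstein-style proxy), with all names hypothetical.

```python
import numpy as np

def sliced_projection_distance(x, y, n_dirs=500, seed=0):
    """Monte-Carlo proxy for a Cramer-Wold-style distance: compare two equally
    sized samples through their 1-D projections onto random unit directions."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_dirs, x.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    total = 0.0
    for u in dirs:
        px, py = np.sort(x @ u), np.sort(y @ u)
        total += np.mean((px - py) ** 2)  # 1-D gap between sorted projections
    return total / n_dirs

rng = np.random.default_rng(1)
a = rng.normal(size=(1000, 5))
b = rng.normal(size=(1000, 5))          # same distribution as a
c = rng.normal(size=(1000, 5)) + 2.0    # shifted distribution
print(sliced_projection_distance(a, b) < sliced_projection_distance(a, c))  # True
```

Matching samples yield a distance near zero, while the shifted sample is clearly separated; a closed form removes the Monte-Carlo averaging over directions entirely.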
arXiv Detail & Related papers (2023-10-25T05:24:23Z)
- Discovering Interpretable Physical Models using Symbolic Regression and Discrete Exterior Calculus
We propose a framework that combines Symbolic Regression (SR) and Discrete Exterior Calculus (DEC) for the automated discovery of physical models.
DEC provides building blocks for the discrete analogue of field theories, which are beyond the state-of-the-art applications of SR to physical problems.
We demonstrate the effectiveness of our methodology by re-discovering three models of continuum physics from synthetic experimental data.
arXiv Detail & Related papers (2023-10-10T13:23:05Z)
- Learning minimal representations of stochastic processes with variational autoencoders
We introduce an unsupervised machine learning approach to determine the minimal set of parameters required to describe a process.
Our approach enables the autonomous discovery of unknown parameters describing such processes.
arXiv Detail & Related papers (2023-07-21T14:25:06Z)
- Capturing dynamical correlations using implicit neural representations
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which can then be applied in real time to multi-dimensional scattering data.
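A minimal sketch of the fit-once, reuse-many-times idea, with everything hypothetical: a forward model that is linear in its single unknown parameter stands in for the trained neural surrogate, and an analytic gradient stands in for automatic differentiation.

```python
import numpy as np

# Toy stand-in (not the paper's model): recover an unknown parameter of a
# differentiable forward model by gradient descent on the mean-squared loss.
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)
g = np.exp(-x ** 2)                      # fixed response profile of the "surrogate"
theta_true = 2.5
y_obs = theta_true * g + 0.01 * rng.normal(size=x.size)  # noisy "experimental" data

theta = 0.0
lr = 0.5
for _ in range(200):
    resid = theta * g - y_obs
    theta -= lr * np.mean(2 * resid * g)  # analytic gradient of the MSE loss
print(round(theta, 1))  # recovers a value near theta_true = 2.5
```

Since the loss is convex in the parameter, gradient descent converges to the least-squares estimate; the paper's setting replaces this toy with a neural surrogate of a model Hamiltonian and genuine autodiff.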
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
- Towards a mathematical understanding of learning from few examples with nonlinear feature maps
We consider the problem of data classification where the training set consists of just a few data points.
We reveal key relationships between the geometry of an AI model's feature space, the structure of the underlying data distributions, and the model's generalisation capabilities.
arXiv Detail & Related papers (2022-11-07T14:52:58Z)
- A Causality-Based Learning Approach for Discovering the Underlying Dynamics of Complex Systems from Partial Observations with Stochastic Parameterization
This paper develops a new iterative learning algorithm for complex turbulent systems with partial observations.
It alternates between identifying model structures, recovering unobserved variables, and estimating parameters.
Numerical experiments show that the new algorithm succeeds in identifying the model structure and providing suitable parameterizations for many complex nonlinear systems.
arXiv Detail & Related papers (2022-08-19T00:35:03Z)
- Learning from few examples with nonlinear feature maps
We explore this phenomenon and reveal key relationships between the dimensionality of an AI model's feature space, the non-degeneracy of the data distributions, and the model's generalisation capabilities.
The main thrust of our analysis is how nonlinear feature transformations, which map the original data into higher- and possibly infinite-dimensional spaces, influence the resulting model's generalisation capabilities.
arXiv Detail & Related papers (2022-03-31T10:36:50Z)
- Manifold learning via quantum dynamics
We introduce an algorithm for computing geodesics on sampled models based on simulation of quantum dynamics on a graph embedding of the sampled data.
Our approach exploits classic results in semiclassical analysis and the quantum-classical correspondence, and forms a basis for techniques to learn the manifold from which a dataset is sampled.
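The sketch below illustrates the geodesic idea on the simplest possible case. For transparency it uses the standard ring-graph Laplacian rather than a data-driven one (an assumption, not the paper's construction): a localized wavepacket given an initial "momentum" phase travels along the ring under exp(-i t L), and tracking its peak traces out a geodesic, here an arc of the circle.

```python
import numpy as np

n, j0, width, m = 256, 128, 8.0, 32
jj = np.arange(n)

# Ring-graph Laplacian (periodic boundary), standing in for a data-driven one.
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, -1] = L[-1, 0] = -1.0

envelope = np.exp(-((jj - j0) ** 2) / (2 * width ** 2))
psi0 = envelope * np.exp(2j * np.pi * m * jj / n)  # localized packet with a momentum kick
psi0 /= np.linalg.norm(psi0)

# Evolve under exp(-i t L) via eigendecomposition of the symmetric Laplacian.
vals, vecs = np.linalg.eigh(L)
psi_t = vecs @ (np.exp(-1j * 7.0 * vals) * (vecs.conj().T @ psi0))

peak = int(np.argmax(np.abs(psi_t)))
print(peak > j0)  # the peak has advanced along the ring (positive group velocity)
```

Stationary-phase analysis gives a group velocity of 2·sin(2πm/n) grid sites per unit time, so after t = 7 the peak has moved roughly ten sites; on a sampled manifold, the peak trajectory plays the role of the geodesic.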
arXiv Detail & Related papers (2021-12-20T18:54:03Z)
- Learning Stochastic Behaviour from Aggregate Data
Learning nonlinear dynamics from aggregate data is a challenging problem because the full trajectory of each individual is not available.
We propose a novel method using the weak form of the Fokker-Planck equation (FPE) to describe the density evolution of data in a sampled form.
In such a sample-based framework we are able to learn the nonlinear dynamics from aggregate data without explicitly solving the FPE as a partial differential equation (PDE).
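A hedged illustration of what "weak form, sample-based" means (not the paper's learning algorithm): for the Ornstein-Uhlenbeck process dX = -θX dt + σ dW, the weak form of the FPE with test function φ(x) = x² gives d/dt E[X²] = -2θ E[X²] + σ², so E[X²](t) = (σ²/2θ)(1 - e^(-2θt)) from X(0) = 0. The check below verifies this moment prediction from samples alone, with no PDE grid anywhere.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, sigma, dt, T, N = 1.0, 1.0, 0.01, 1.0, 100_000

# Euler-Maruyama simulation of an ensemble of OU trajectories from X(0) = 0.
x = np.zeros(N)
for _ in range(int(T / dt)):
    x += -theta * x * dt + sigma * np.sqrt(dt) * rng.normal(size=N)

empirical = np.mean(x ** 2)                                     # sample-based moment
predicted = (sigma ** 2 / (2 * theta)) * (1 - np.exp(-2 * theta * T))  # weak-form FPE
print(round(empirical, 3), round(predicted, 3))  # the two agree to about 1%
```

Matching moments of test functions against their weak-form evolution is the basic mechanism that lets such methods fit dynamics to aggregate (trajectory-free) data.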
arXiv Detail & Related papers (2020-02-10T03:20:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.