Quantum State Assignment Flows
- URL: http://arxiv.org/abs/2307.00075v1
- Date: Fri, 30 Jun 2023 18:29:14 GMT
- Title: Quantum State Assignment Flows
- Authors: Jonathan Schwarz, Jonas Cassel, Bastian Boll, Martin Gärttner, Peter Albers, Christoph Schnörr
- Abstract summary: This paper introduces assignment flows for density matrices as state spaces for representing and analyzing data associated with vertices of an underlying weighted graph.
A novel approach for data representation and analysis, including the representation of correlations of data across the graph by entanglement and tensorization, is presented.
- Score: 3.7886425043810905
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces assignment flows for density matrices as state spaces
for representing and analyzing data associated with vertices of an underlying
weighted graph. Determining an assignment flow by geometric integration of the
defining dynamical system causes an interaction of the non-commuting states
across the graph, and the assignment of a pure (rank-one) state to each vertex
after convergence. Adopting the Riemannian Bogoliubov-Kubo-Mori metric from
information geometry leads to closed-form local expressions which can be
computed efficiently and implemented in a fine-grained parallel manner.
Restriction to the submanifold of commuting density matrices recovers the
assignment flows for categorial probability distributions, which merely assign
labels from a finite set to each data point. As shown for these flows in our
prior work, the novel class of quantum state assignment flows can also be
characterized as Riemannian gradient flows with respect to a non-local
non-convex potential, after proper reparametrization and under mild conditions
on the underlying weight function. This weight function generates the
parameters of the layers of a neural network, corresponding to and generated by
each step of the geometric integration scheme.
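For reference, the Bogoliubov-Kubo-Mori (BKM) metric adopted above is the standard monotone metric from quantum information geometry (a textbook definition, not a formula specific to this paper): for Hermitian, trace-zero tangent vectors X, Y at a density matrix \rho,
    g^{\mathrm{BKM}}_{\rho}(X, Y) = \int_{0}^{\infty} \operatorname{tr}\!\left[(\rho + \lambda I)^{-1} X \,(\rho + \lambda I)^{-1} Y\right] \mathrm{d}\lambda .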
Numerical results indicate and illustrate the potential of the novel
approach for data representation and analysis, including the representation of
correlations of data across the graph by entanglement and tensorization.
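As an illustration of the kind of dynamics the abstract describes, the following is a minimal, hypothetical Python sketch of one explicit integration step of a simplified density-matrix assignment flow. The graph coupling, the exp/log retraction, and the trace normalization follow the abstract's description, but the concrete update rule and all names (assignment_flow_step, payload, etc.) are illustrative assumptions, not the authors' implementation.

    import numpy as np
    from scipy.linalg import expm, logm

    def normalize(rho):
        # Rescale a Hermitian positive-definite matrix to unit trace,
        # i.e., project back onto the set of density matrices.
        return rho / np.trace(rho).real

    def assignment_flow_step(rhos, W, payload, h=0.1):
        # One explicit geometric-integration step (illustrative assumption:
        # a replicator-like update via the matrix exponential).
        #   rhos    : length-n list of d x d density matrices (one per vertex)
        #   W       : (n, n) nonnegative weight matrix of the graph
        #   payload : length-n list of d x d Hermitian data matrices
        #   h       : step size of the integration scheme
        n = len(rhos)
        out = []
        for i in range(n):
            # Graph-weighted average of neighboring payload matrices; this
            # is where the non-commuting states interact across the graph.
            A = sum(W[i, j] * payload[j] for j in range(n))
            # Move along the flow direction and renormalize so the iterate
            # stays on the density-matrix manifold.
            out.append(normalize(expm(logm(rhos[i]) + h * A)))
        return out

For diagonal (mutually commuting) density matrices this update reduces to a softmax-like multiplicative reweighting of the diagonal, consistent with the abstract's remark that restriction to commuting matrices recovers the classical assignment flows for categorial distributions.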
Related papers
- Sigma Flows for Image and Data Labeling and Learning Structured Prediction [2.4699742392289]
This paper introduces the sigma flow model for the prediction of structured labelings of data observed on Riemannian manifolds.
The approach combines the Laplace-Beltrami framework for image denoising and enhancement, introduced by Sochen, Kimmel and Malladi about 25 years ago, and the assignment flow approach introduced and studied by the authors.
arXiv Detail & Related papers (2024-08-28T17:04:56Z)
- Improving embedding of graphs with missing data by soft manifolds [51.425411400683565]
The reliability of graph embeddings depends on how much the geometry of the continuous space matches the graph structure.
We introduce a new class of manifolds, named soft manifolds, that can resolve this issue.
Using soft manifolds for graph embedding, we can provide continuous spaces to pursue any task in data analysis over complex datasets.
arXiv Detail & Related papers (2023-11-29T12:48:33Z)
- On Learning Gaussian Multi-index Models with Gradient Flow [57.170617397894404]
We study gradient flow on the multi-index regression problem for high-dimensional Gaussian data.
We consider a two-timescale algorithm, whereby the low-dimensional link function is learnt with a non-parametric model infinitely faster than the subspace parametrizing the low-rank projection.
arXiv Detail & Related papers (2023-10-30T17:55:28Z)
- Representing Edge Flows on Graphs via Sparse Cell Complexes [6.74438532801556]
We introduce the flow representation learning problem, i.e., the problem of augmenting the observed graph by a set of cells.
We show that this problem is NP-hard and introduce an efficient approximation algorithm for its solution.
arXiv Detail & Related papers (2023-09-04T14:30:20Z)
- Graphical Normalizing Flows [11.23030807455021]
Normalizing flows model complex probability distributions by combining a base distribution with a series of neural networks.
State-of-the-art architectures rely on coupling and autoregressive transformations to lift up invertible functions from scalars to vectors.
We propose the graphical normalizing flow, a new invertible transformation with either a prescribed or a learnable graphical structure.
arXiv Detail & Related papers (2020-06-03T21:50:29Z)
- Semiparametric Nonlinear Bipartite Graph Representation Learning with Provable Guarantees [106.91654068632882]
We consider the bipartite graph and formalize its representation learning problem as a statistical estimation problem of parameters in a semiparametric exponential family distribution.
We show that the proposed objective is strongly convex in a neighborhood around the ground truth, so that a gradient descent-based method achieves linear convergence rate.
Our estimator is robust to any model misspecification within the exponential family, which is validated in extensive experiments.
arXiv Detail & Related papers (2020-03-02T16:40:36Z)
- Assignment Flows for Data Labeling on Graphs: Convergence and Stability [69.68068088508505]
This paper establishes conditions on the weight parameters that guarantee convergence of the continuous-time assignment flow to integral assignments (labelings).
Several counter-examples illustrate that violating the conditions may entail unfavorable behavior of the assignment flow regarding contextual data classification.
arXiv Detail & Related papers (2020-02-26T15:45:38Z)
- Block-Approximated Exponential Random Graphs [77.4792558024487]
An important challenge in the field of exponential random graphs (ERGs) is the fitting of non-trivial ERGs on large graphs.
We propose an approximative framework for such non-trivial ERGs that results in dyadic independence (i.e., edge-independent) distributions.
Our methods are scalable to sparse graphs consisting of millions of nodes.
arXiv Detail & Related papers (2020-02-14T11:42:16Z)
- A Near-Optimal Gradient Flow for Learning Neural Energy-Based Models [93.24030378630175]
We propose a novel numerical scheme to optimize the gradient flows for learning energy-based models (EBMs).
We derive a second-order Wasserstein gradient flow of the global relative entropy from the Fokker-Planck equation.
Compared with existing schemes, Wasserstein gradient flow is a smoother and near-optimal numerical scheme to approximate real data densities.
arXiv Detail & Related papers (2019-10-31T02:26:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.