Machine learning with quantum field theories
- URL: http://arxiv.org/abs/2109.07730v1
- Date: Thu, 16 Sep 2021 05:42:01 GMT
- Title: Machine learning with quantum field theories
- Authors: Dimitrios Bachtis, Gert Aarts, Biagio Lucini
- Abstract summary: We will show that the $\phi^{4}$ scalar field theory on a square lattice satisfies the local Markov property and can therefore be recast as a Markov random field.
We will then derive from the $\phi^{4}$ theory machine learning algorithms and neural networks which can be viewed as generalizations of conventional neural network architectures.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The precise equivalence between discretized Euclidean field theories and a
certain class of probabilistic graphical models, namely the mathematical
framework of Markov random fields, opens up the opportunity to investigate
machine learning from the perspective of quantum field theory. In this
contribution we will demonstrate, through the Hammersley-Clifford theorem, that
the $\phi^{4}$ scalar field theory on a square lattice satisfies the local
Markov property and can therefore be recast as a Markov random field. We will
then derive from the $\phi^{4}$ theory machine learning algorithms and neural
networks which can be viewed as generalizations of conventional neural network
architectures. Finally, we will conclude by presenting applications based on
the minimization of an asymmetric distance between the probability distribution
of the $\phi^{4}$ machine learning algorithms and target probability
distributions.
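As a concrete illustration of the locality argument in the abstract, the sketch below samples a two-dimensional lattice $\phi^{4}$ field with single-site Metropolis updates. It is a minimal sketch, not the authors' implementation: the action parametrization $S[\phi]=\sum_x\big[-\kappa\sum_{\mu}\phi_{x}\phi_{x+\hat{\mu}}+\phi_{x}^{2}+\lambda(\phi_{x}^{2}-1)^{2}\big]$, the names kappa, lam, and step, and all numerical values are assumptions. Because the Boltzmann weight $e^{-S[\phi]}$ factorizes over nearest-neighbour cliques, each update depends only on a site's four neighbours, which is the local Markov property underlying the Markov random field construction; samples generated this way could then enter the minimization of an asymmetric (e.g. Kullback-Leibler) distance to a target distribution.
```python
# Minimal illustrative sketch (not the paper's code): 2D lattice phi^4 field
# sampled with local Metropolis updates under an assumed action
#   S[phi] = sum_x [ -kappa * sum_mu phi_x phi_{x+mu} + phi_x^2 + lam*(phi_x^2 - 1)^2 ].
import numpy as np

rng = np.random.default_rng(0)


def site_action(phi, x, y, kappa, lam):
    """Terms of the action involving site (x, y): only its four neighbours enter."""
    L = phi.shape[0]
    s = phi[x, y]
    neighbours = (phi[(x + 1) % L, y] + phi[(x - 1) % L, y]
                  + phi[x, (y + 1) % L] + phi[x, (y - 1) % L])
    return -kappa * s * neighbours + s**2 + lam * (s**2 - 1.0) ** 2


def metropolis_sweep(phi, kappa=1.0, lam=1.0, step=0.5):
    """One sweep of single-site Metropolis updates.

    Because p(phi) ~ exp(-S[phi]) factorizes over nearest-neighbour cliques,
    the acceptance ratio depends only on the updated site and its neighbours:
    this is the local Markov property behind the MRF construction.
    """
    L = phi.shape[0]
    for x in range(L):
        for y in range(L):
            old = phi[x, y]
            s_old = site_action(phi, x, y, kappa, lam)
            phi[x, y] = old + step * rng.uniform(-1.0, 1.0)
            s_new = site_action(phi, x, y, kappa, lam)
            # Accept with probability min(1, exp(-(S_new - S_old))).
            if rng.random() >= np.exp(min(0.0, s_old - s_new)):
                phi[x, y] = old  # reject the proposal
    return phi


if __name__ == "__main__":
    L = 8
    phi = rng.normal(size=(L, L))
    for _ in range(200):
        phi = metropolis_sweep(phi)
    # Samples drawn this way could then be used to estimate an asymmetric
    # (Kullback-Leibler-type) distance to a target distribution during training.
    print("mean |phi| after 200 sweeps:", np.abs(phi).mean())
```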
Related papers
- Neural network representation of quantum systems [0.0]
We provide a novel map with which a wide class of quantum mechanical systems can be cast into the form of a neural network.
Our findings bring machine learning closer to the quantum world.
arXiv Detail & Related papers (2024-03-18T02:20:22Z)
- Gaussian Entanglement Measure: Applications to Multipartite Entanglement of Graph States and Bosonic Field Theory [50.24983453990065]
An entanglement measure based on the Fubini-Study metric has been recently introduced by Cocchiarella and co-workers.
We present the Gaussian Entanglement Measure (GEM), a generalization of geometric entanglement measure for multimode Gaussian states.
By providing a computable multipartite entanglement measure for systems with a large number of degrees of freedom, we show that our definition can be used to obtain insights into a free bosonic field theory.
arXiv Detail & Related papers (2024-01-31T15:50:50Z)
- Neural Network Field Theories: Non-Gaussianity, Actions, and Locality [0.0]
Both the path integral measure in field theory and ensembles of neural networks describe distributions over functions.
An expansion in $1/N$ corresponds to interactions in the field theory, but other expansions, such as one in a small breaking of the statistical independence of network parameters, can also lead to interacting theories.
arXiv Detail & Related papers (2023-07-06T18:00:01Z)
- Correspondence between open bosonic systems and stochastic differential equations [77.34726150561087]
We show that there can also be an exact correspondence at finite $n$ when the bosonic system is generalized to include interactions with the environment.
A particular system with the form of a discrete nonlinear Schrödinger equation is analyzed in more detail.
arXiv Detail & Related papers (2023-02-03T19:17:37Z)
- Quantum algorithm for Markov Random Fields structure learning by information theoretic properties [5.263910852465186]
We propose a quantum algorithm for structure learning of an $r$-wise Markov Random Field on quantum computers.
Our work demonstrates the potential merits of quantum computation over classical computation in solving some problems in machine learning.
arXiv Detail & Related papers (2022-08-24T09:00:56Z)
- Theory of Quantum Generative Learning Models with Maximum Mean Discrepancy [67.02951777522547]
We study the learnability of quantum circuit Born machines (QCBMs) and quantum generative adversarial networks (QGANs).
We first analyze the generalization ability of QCBMs and identify their superiorities when the quantum devices can directly access the target distribution.
Next, we prove how the generalization error bound of QGANs depends on the employed Ansatz, the number of qudits, and input states.
arXiv Detail & Related papers (2022-05-10T08:05:59Z)
- Quantum field theories, Markov random fields and machine learning [0.0]
We will discuss how discretized Euclidean field theories can be recast within the mathematical framework of Markov random fields.
Specifically, we will demonstrate that the $\phi^{4}$ scalar field theory on a square lattice satisfies the Hammersley-Clifford theorem.
We will then discuss applications pertinent to the minimization of an asymmetric distance between the probability distribution of the $\phi^{4}$ machine learning algorithms and target probability distributions.
arXiv Detail & Related papers (2021-10-21T06:50:33Z)
- Quantum Simulation of Conformal Field Theory [77.34726150561087]
We describe a quantum algorithm to simulate the dynamics of conformal field theories.
A full analysis of the approximation errors suggests near-term applicability.
arXiv Detail & Related papers (2021-09-29T06:44:33Z)
- Logical Credal Networks [87.25387518070411]
This paper introduces Logical Credal Networks, an expressive probabilistic logic that generalizes many prior models that combine logic and probability.
We investigate its performance on maximum a posteriori inference tasks, including solving Mastermind games with uncertainty and detecting credit card fraud.
arXiv Detail & Related papers (2021-09-25T00:00:47Z)
- Quantum field-theoretic machine learning [0.0]
We recast the $\phi^{4}$ scalar field theory as a machine learning algorithm within the mathematically rigorous framework of Markov random fields.
Neural networks, which can be viewed as generalizations of conventional neural networks, are additionally derived from the $\phi^{4}$ theory.
arXiv Detail & Related papers (2021-02-18T16:12:51Z)
- Stochastic Flows and Geometric Optimization on the Orthogonal Group [52.50121190744979]
We present a new class of geometrically-driven optimization algorithms on the orthogonal group $O(d)$.
We show that our methods can be applied in various fields of machine learning including deep, convolutional and recurrent neural networks, reinforcement learning, flows and metric learning.
arXiv Detail & Related papers (2020-03-30T15:37:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.