One-step replica symmetry breaking in the language of tensor networks
- URL: http://arxiv.org/abs/2306.15004v1
- Date: Mon, 26 Jun 2023 18:42:51 GMT
- Title: One-step replica symmetry breaking in the language of tensor networks
- Authors: Nicola Pancotti and Johnnie Gray
- Abstract summary: We develop an exact mapping between the one-step replica symmetry breaking cavity method and tensor networks.
The two schemes come with complementary mathematical and numerical toolboxes that could be leveraged to improve the respective states of the art.
- Score: 0.913755431537592
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We develop an exact mapping between the one-step replica symmetry breaking
cavity method and tensor networks. The two schemes come with complementary
mathematical and numerical toolboxes that could be leveraged to improve the
respective states of the art. As an example, we construct a tensor-network
representation of Survey Propagation, one of the best deterministic k-SAT
solvers. The resulting algorithm outperforms any existent tensor-network solver
by several orders of magnitude. We comment on the generality of these ideas,
and we show how to extend them to the context of quantum tensor networks.
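The mapping itself is beyond this summary, but the underlying idea of phrasing a constraint-satisfaction count as a tensor-network contraction can be sketched with a toy #SAT instance (a minimal NumPy illustration, not the paper's Survey Propagation algorithm; all names are illustrative):

```python
import numpy as np

# Toy #SAT instance: (x1 OR x2) AND (NOT x1 OR x2) over Boolean x1, x2.
# Each clause becomes a 0/1 tensor that equals 1 iff the clause is satisfied.
def clause_tensor(signs):
    # signs[i] = +1 for a positive literal, -1 for a negated one
    t = np.zeros((2,) * len(signs))
    for idx in np.ndindex(*t.shape):
        sat = any((bit == 1) if s > 0 else (bit == 0)
                  for bit, s in zip(idx, signs))
        t[idx] = 1.0 if sat else 0.0
    return t

c1 = clause_tensor([+1, +1])   # x1 OR x2
c2 = clause_tensor([-1, +1])   # NOT x1 OR x2

# Contracting the shared variable indices sums over all assignments,
# so the scalar result counts satisfying assignments of the conjunction.
count = np.einsum('ab,ab->', c1, c2)
print(count)  # 2.0: assignments (0,1) and (1,1) satisfy both clauses
```

Real solvers replace this brute-force contraction with structured, approximate contractions; the toy only shows why a SAT count is a tensor network.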
Related papers
- Compressing multivariate functions with tree tensor networks [0.0]
One-dimensional tensor networks are increasingly being used as a numerical ansatz for continuum functions.
We show how more structured tree tensor networks offer a significantly more efficient ansatz than the commonly used tensor train.
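As a rough illustration of the tensor-train (MPS) format that the tree networks above generalize, here is a minimal NumPy sketch compressing a discretized 1D function with sequential truncated SVDs (the function, tolerance, and sizes are illustrative, not the paper's method):

```python
import numpy as np

# Discretize f(x) = exp(-x) on 2**n points, view it as an n-index tensor of
# shape (2, ..., 2), and compress with sequential truncated SVDs: this is
# the tensor-train / matrix-product-state format.
n = 10
x = np.linspace(0.0, 1.0, 2 ** n)
tensor = np.exp(-x).reshape((2,) * n)

def tensor_train(t, tol=1e-10):
    cores, r = [], 1
    for _ in range(t.ndim - 1):
        mat = t.reshape(r * 2, -1)
        u, s, vh = np.linalg.svd(mat, full_matrices=False)
        keep = max(1, int(np.sum(s > tol * s[0])))  # drop tiny singular values
        cores.append(u[:, :keep].reshape(r, 2, keep))
        t, r = s[:keep, None] * vh[:keep], keep
    cores.append(t.reshape(r, 2, 1))
    return cores

cores = tensor_train(tensor)
print([c.shape[2] for c in cores[:-1]])  # bond dimensions stay tiny: exp(-x) is separable
```

Contracting the cores back together recovers the original tensor up to the truncation tolerance.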
arXiv Detail & Related papers (2024-10-04T16:20:52Z)
- Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs).
Our approach develops within a recently introduced framework aimed at learning neural network-based surrogate models of Lattice Boltzmann collision operators.
Our work opens towards practical utilization of machine learning-augmented Lattice Boltzmann CFD in real-world simulations.
arXiv Detail & Related papers (2024-05-22T17:23:15Z)
- Tensor cumulants for statistical inference on invariant distributions [49.80012009682584]
We show that PCA becomes computationally hard at a critical value of the signal's magnitude.
We define a new set of objects, which provide an explicit, near-orthogonal basis for invariants of a given degree.
It also lets us analyze a new problem of distinguishing between different ensembles.
arXiv Detail & Related papers (2024-04-29T14:33:24Z)
- Fermionic tensor network methods [0.0]
We show how fermionic statistics can be naturally incorporated in tensor networks on arbitrary graphs through the use of graded Hilbert spaces.
This formalism allows tensor network methods to be applied to fermionic lattice systems in a local way, avoiding the need for a Jordan-Wigner transformation or the explicit tracking of leg crossings by swap gates in 2D tensor networks.
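The swap gates mentioned above can be made concrete with a small sketch (an illustrative construction, not the paper's graded formalism): exchanging two fermionic modes acts like an ordinary SWAP, except for a minus sign when both modes are occupied.

```python
import numpy as np

# Fermionic swap gate on occupation-number basis states |a, b>:
# swaps the two modes, with a minus sign when both are occupied.
fswap = np.zeros((2, 2, 2, 2))
for a in range(2):
    for b in range(2):
        sign = -1.0 if a == 1 and b == 1 else 1.0
        fswap[b, a, a, b] = sign  # output pair (b, a) from input pair (a, b)

fswap_mat = fswap.reshape(4, 4)
print(np.allclose(fswap_mat @ fswap_mat, np.eye(4)))  # True: the gate is its own inverse
```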
arXiv Detail & Related papers (2024-04-22T22:22:05Z)
- SO(2) and O(2) Equivariance in Image Recognition with Bessel-Convolutional Neural Networks [63.24965775030674]
This work presents the development of Bessel-convolutional neural networks (B-CNNs).
B-CNNs exploit a particular decomposition based on Bessel functions to modify the key operation between images and filters.
A study is carried out to assess the performance of B-CNNs compared to other methods.
arXiv Detail & Related papers (2023-04-18T18:06:35Z)
- Stack operation of tensor networks [10.86105335102537]
We propose a mathematically rigorous definition for the tensor network stack approach.
We illustrate the main ideas with the matrix product states based machine learning as an example.
arXiv Detail & Related papers (2022-03-28T12:45:13Z)
- Dimension of Tensor Network varieties [68.8204255655161]
We determine an upper bound on the dimension of the tensor network variety.
A refined upper bound is given in cases relevant for applications such as varieties of matrix product states and projected entangled pairs states.
arXiv Detail & Related papers (2021-01-08T18:24:50Z)
- T-Basis: a Compact Representation for Neural Networks [89.86997385827055]
We introduce T-Basis, a concept for a compact representation of a set of tensors, each of an arbitrary shape, which is often seen in Neural Networks.
We evaluate the proposed approach on the task of neural network compression and demonstrate that it reaches high compression rates at acceptable performance drops.
arXiv Detail & Related papers (2020-07-13T19:03:22Z)
- Riemannian optimization of isometric tensor networks [0.0]
We show how gradient-based optimization methods can be used to optimize tensor networks of isometries to represent e.g. ground states of 1D quantum Hamiltonians.
We apply these methods in the context of infinite MPS and MERA, and show benchmark results in which they outperform the best previously-known optimization methods.
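A minimal sketch of the kind of Riemannian gradient step involved, assuming a toy cost tr(W^T A W) over isometries W^T W = I (the problem, step size, and iteration count are illustrative, not the paper's setup):

```python
import numpy as np

# A few hundred Riemannian gradient steps on the Stiefel manifold
# {W : W^T W = I}: project the Euclidean gradient onto the tangent space,
# take a step, then retract back onto the manifold with a QR decomposition.
rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
A = B + B.T                                       # a symmetric "Hamiltonian"
W, _ = np.linalg.qr(rng.standard_normal((6, 3)))  # random initial isometry

for _ in range(200):
    g = 2.0 * A @ W                               # Euclidean gradient of tr(W^T A W)
    rgrad = g - W @ (W.T @ g + g.T @ W) / 2.0     # tangent-space projection
    W, _ = np.linalg.qr(W - 0.02 * rgrad)         # retraction restores W^T W = I

print(np.allclose(W.T @ W, np.eye(3)))  # True: every iterate is an isometry
```

The QR retraction is the simplest choice; the point is that the isometry constraint is maintained exactly at every step rather than enforced by penalty terms.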
arXiv Detail & Related papers (2020-07-07T17:19:05Z)
- Optimization at the boundary of the tensor network variety [2.1839191255085995]
Tensor network states form a variational ansatz class widely used in the study of quantum many-body systems.
Recent work has shown that states on the boundary of this variety can yield more efficient representations for states of physical interest.
We show how to optimize over this class in order to find ground states of local Hamiltonians.
arXiv Detail & Related papers (2020-06-30T16:58:55Z)
- Understanding Graph Neural Networks with Generalized Geometric Scattering Transforms [67.88675386638043]
The scattering transform is a multilayered wavelet-based deep learning architecture that acts as a model of convolutional neural networks.
We introduce windowed and non-windowed geometric scattering transforms for graphs based upon a very general class of asymmetric wavelets.
We show that these asymmetric graph scattering transforms have many of the same theoretical guarantees as their symmetric counterparts.
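A toy version of graph scattering, assuming wavelets built from powers of the lazy random-walk matrix (a common construction in this literature; the graph, signal, and moments here are illustrative, not the paper's asymmetric wavelets):

```python
import numpy as np

# Diffusion wavelets from the lazy random-walk matrix P = (I + A D^{-1}) / 2:
# Psi_j = P^(2^(j-1)) - P^(2^j). Scattering features are moments of |Psi_j x|.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # path graph on 4 nodes
D_inv = np.diag(1.0 / A.sum(axis=1))
P = 0.5 * (np.eye(4) + A @ D_inv)
x = np.array([1.0, 0.0, 0.0, 0.0])          # delta signal on node 0

def wavelet(j):
    return (np.linalg.matrix_power(P, 2 ** (j - 1))
            - np.linalg.matrix_power(P, 2 ** j))

# First-order scattering coefficients: q-th moments of the wavelet responses
feats = [np.sum(np.abs(wavelet(j) @ x) ** q) for j in (1, 2) for q in (1, 2)]
print(len(feats))  # 4
```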
arXiv Detail & Related papers (2019-11-14T17:23:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.