Representing Arbitrary Ground States of Toric Code by Restricted Boltzmann Machine
- URL: http://arxiv.org/abs/2407.01451v2
- Date: Mon, 15 Jul 2024 21:40:20 GMT
- Title: Representing Arbitrary Ground States of Toric Code by Restricted Boltzmann Machine
- Authors: Penghua Chen, Bowen Yan, Shawn X. Cui
- Abstract summary: We analyze the representability of toric code ground states by a Restricted Boltzmann Machine with only local connections between hidden and visible neurons.
We then modify the Restricted Boltzmann Machine to accommodate arbitrary ground states by efficiently introducing essential non-local connections.
- Score: 0.23408308015481666
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We systematically analyze the representability of toric code ground states by a Restricted Boltzmann Machine with only local connections between hidden and visible neurons. This analysis is pivotal for evaluating the model's capability to represent diverse ground states, thus enhancing our understanding of its strengths and weaknesses. Subsequently, we modify the Restricted Boltzmann Machine to accommodate arbitrary ground states by efficiently introducing essential non-local connections. The new model is not only analytically solvable but also demonstrates efficient and accurate performance when solved using machine learning techniques. Finally, we generalize the model from $Z_2$ to $Z_n$ toric codes and discuss future directions.
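For context, the object of study is the standard RBM wavefunction ansatz, in which the hidden units can be traced out analytically. Below is a minimal sketch of how such an ansatz assigns an amplitude to a spin configuration; the parameter names `a`, `b`, `W` and the random values are illustrative only, not the paper's local construction for the toric code.

```python
import numpy as np

def rbm_amplitude(sigma, a, b, W):
    """Unnormalized RBM wavefunction amplitude for a spin configuration.

    Hidden units are summed out analytically:
        psi(sigma) = exp(a . sigma) * prod_j 2*cosh(b_j + W[j] . sigma)
    "Local connections" means each row of W touches only a few nearby spins.
    """
    theta = b + W @ sigma                      # effective field on each hidden unit
    return np.exp(a @ sigma) * np.prod(2.0 * np.cosh(theta))

rng = np.random.default_rng(0)
n_visible, n_hidden = 8, 4
sigma = rng.choice([-1, 1], size=n_visible)    # one spin configuration
a = rng.normal(scale=0.1, size=n_visible)
b = rng.normal(scale=0.1, size=n_hidden)
W = rng.normal(scale=0.1, size=(n_hidden, n_visible))
print(rbm_amplitude(sigma, a, b, W))
```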
Related papers
- Noise to the Rescue: Escaping Local Minima in Neurosymbolic Local Search [50.24983453990065]
We show that applying BP to Gödel logic, which represents conjunction and disjunction as min and max, is equivalent to a local search algorithm for SAT solving.
We propose the Gödel Trick, which adds noise to the model's logits to escape local optima.
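As a rough illustration of that mechanism (not the paper's implementation), the sketch below evaluates a toy CNF formula under Gödel semantics (AND = min, OR = max, NOT x = 1 - x) and runs noisy gradient ascent on variable logits; the formula, step size, and noise scale are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical CNF: (x0 or ~x1) and (x1 or x2) and (~x0 or ~x2).
cnf = [[(0, True), (1, False)], [(1, True), (2, True)], [(0, False), (2, False)]]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def godel_value(logits, cnf):
    """Formula truth value under Goedel semantics on [0, 1]."""
    x = sigmoid(logits)
    clause = lambda lits: max(x[i] if pos else 1.0 - x[i] for i, pos in lits)
    return min(clause(lits) for lits in cnf)

logits = rng.normal(size=3)
eps, lr, noise = 1e-4, 0.2, 0.5
for _ in range(300):
    noisy = logits + noise * rng.normal(size=3)   # the noise that escapes local optima
    grad = np.zeros(3)                            # numerical gradient of the
    for i in range(3):                            # piecewise-linear objective
        e = np.zeros(3); e[i] = eps
        grad[i] = (godel_value(noisy + e, cnf) - godel_value(noisy - e, cnf)) / (2 * eps)
    logits += lr * grad

print(sigmoid(logits) > 0.5, godel_value(logits, cnf))
```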
arXiv Detail & Related papers (2025-03-03T18:42:13Z)
- A Pseudo-Semantic Loss for Autoregressive Models with Logical Constraints [87.08677547257733]
Neuro-symbolic AI bridges the gap between purely symbolic and neural approaches to learning.
We show how to maximize the likelihood of a symbolic constraint w.r.t. the neural network's output distribution.
We also evaluate our approach on Sudoku and shortest-path prediction cast as autoregressive generation.
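The "likelihood of a symbolic constraint" above can be made concrete with a toy example. Under an independence assumption on the output bits, the likelihood of an "exactly one bit is on" constraint has a closed form whose negative log serves as a loss; the paper's pseudo-semantic loss approximates such quantities for autoregressive distributions, so this sketch is only an intuition pump.

```python
import numpy as np

def exactly_one_loss(p):
    """-log P(exactly one bit is 1) under independent Bernoulli outputs p.

    P = sum_i p_i * prod_{j != i} (1 - p_j).  Toy stand-in for the
    constraint likelihoods the pseudo-semantic loss approximates.
    """
    q = 1.0 - p
    prob = sum(p[i] * np.prod(np.delete(q, i)) for i in range(len(p)))
    return -np.log(prob)

print(exactly_one_loss(np.array([0.7, 0.2, 0.1])))  # low loss: nearly one-hot
print(exactly_one_loss(np.array([0.5, 0.5, 0.5])))  # higher loss: ambiguous output
```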
arXiv Detail & Related papers (2023-12-06T20:58:07Z)
- Universal representation by Boltzmann machines with Regularised Axons [34.337412054122076]
We show that regularised Boltzmann machines preserve the ability to represent arbitrary distributions.
We also show that regularised Boltzmann machines can store exponentially many arbitrarily correlated visible patterns with perfect retrieval.
arXiv Detail & Related papers (2023-10-22T20:05:47Z)
- Inferring effective couplings with Restricted Boltzmann Machines [3.150368120416908]
Generative models attempt to encode the correlations observed in the data in the Boltzmann weight associated with an energy function, which takes the form of a neural network.
We propose a solution by implementing a direct mapping between the Restricted Boltzmann Machine and an effective Ising spin Hamiltonian.
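One common way to realize such a mapping (a sketch under generic assumptions, not necessarily the paper's exact construction): trace out the hidden layer to obtain an effective energy on the visible spins, then read off pairwise couplings by flipping pairs of spins around a reference configuration.

```python
import numpy as np

def effective_energy(v, a, b, W):
    """Effective visible-spin energy after summing out hidden units:
    H_eff(v) = -a.v - sum_j log(2 cosh(b_j + W[j].v))."""
    return -(a @ v) - np.sum(np.log(2.0 * np.cosh(b + W @ v)))

def effective_coupling(i, j, a, b, W, v_ref):
    """J_ij = -(1/4) sum_{s_i, s_j = +-1} s_i s_j H_eff(v).  Linear and
    constant terms cancel in this sum, isolating the pairwise Ising coupling
    (higher-order terms generated by the hidden layer depend on v_ref)."""
    total = 0.0
    for si in (1, -1):
        for sj in (1, -1):
            v = v_ref.copy()
            v[i], v[j] = si, sj
            total += si * sj * effective_energy(v, a, b, W)
    return -0.25 * total

rng = np.random.default_rng(0)
n_v, n_h = 6, 3
a, b = 0.1 * rng.normal(size=n_v), 0.1 * rng.normal(size=n_h)
W = 0.3 * rng.normal(size=(n_h, n_v))
print(effective_coupling(0, 1, a, b, W, np.ones(n_v)))
```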
arXiv Detail & Related papers (2023-09-05T14:55:09Z)
- Neural Abstractions [72.42530499990028]
We present a novel method for the safety verification of nonlinear dynamical models that uses neural networks to represent abstractions of their dynamics.
We demonstrate that our approach performs comparably to the mature tool Flow* on existing benchmark nonlinear models.
arXiv Detail & Related papers (2023-01-27T12:38:09Z)
- Guaranteed Conformance of Neurosymbolic Models to Natural Constraints [4.598757178874836]
In safety-critical applications, it is important that the data-driven model is conformant to established knowledge from the natural sciences.
We propose a method to guarantee this conformance.
We experimentally show that our constrained neurosymbolic models conform to specified models.
arXiv Detail & Related papers (2022-12-02T18:03:37Z)
- Towards Practical Control of Singular Values of Convolutional Layers [65.25070864775793]
Convolutional neural networks (CNNs) are easy to train, but their essential properties, such as generalization error and adversarial robustness, are hard to control.
Recent research demonstrated that singular values of convolutional layers significantly affect such elusive properties.
We offer a principled approach to alleviating constraints of the prior art at the expense of an insignificant reduction in layer expressivity.
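The singular values in question can be computed exactly for circularly padded convolutions via the FFT, following Sedghi et al. (ICLR 2019), on which this line of work builds; the sketch below assumes a kernel in (height, width, c_in, c_out) layout and is illustrative rather than the cited paper's method.

```python
import numpy as np

def conv_singular_values(kernel, n):
    """All singular values of a circularly padded conv layer on an n x n grid:
    2D-FFT the kernel over its spatial axes, then take the SVD of the
    resulting c_in x c_out matrix at each of the n^2 frequencies."""
    transforms = np.fft.fft2(kernel, s=(n, n), axes=(0, 1))   # (n, n, c_in, c_out)
    return np.linalg.svd(transforms, compute_uv=False)        # (n, n, min(c_in, c_out))

kernel = np.random.randn(3, 3, 4, 8)
svals = conv_singular_values(kernel, 32)
print(svals.max())   # the layer's spectral norm under circular padding
```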
arXiv Detail & Related papers (2022-11-24T19:09:44Z)
- Robustness Certificates for Implicit Neural Networks: A Mixed Monotone Contractive Approach [60.67748036747221]
Implicit neural networks offer competitive performance and reduced memory consumption.
However, they can remain brittle with respect to adversarial input perturbations.
This paper proposes a theoretical and computational framework for robustness verification of implicit neural networks.
arXiv Detail & Related papers (2021-12-10T03:08:55Z)
- Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
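The general recipe behind equilibrium-based training looks like the following toy sketch (a tanh fixed point, not the paper's SNN model): run the forward dynamics to a fixed point z* = f(z*, x), then obtain parameter gradients from the implicit function theorem by solving one linear system rather than backpropagating through the forward iterations.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5
W = 0.1 * rng.normal(size=(d, d))   # small weights keep the map contractive
U = rng.normal(size=(d, d))
x = rng.normal(size=d)

# Forward: iterate the dynamics to the equilibrium state z* = tanh(W z* + U x).
z = np.zeros(d)
for _ in range(100):
    z = np.tanh(W @ z + U @ x)

# Backward: implicit function theorem.  For loss L = 0.5 * ||z*||^2, solve
# (I - J^T) g = dL/dz* once instead of unrolling the forward iterations.
dL_dz = z
J = (1.0 - z**2)[:, None] * W            # Jacobian of tanh(W z + U x) w.r.t. z
g = np.linalg.solve(np.eye(d) - J.T, dL_dz)
dL_dW = np.outer((1.0 - z**2) * g, z)    # gradient w.r.t. the feedback weights
print(np.linalg.norm(dL_dW))
```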
arXiv Detail & Related papers (2021-09-29T07:46:54Z)
- Assessment of machine learning methods for state-to-state approaches [0.0]
We investigate the possibilities offered by the use of machine learning methods for state-to-state approaches.
Deep neural networks appear to be a viable technology for these tasks as well.
arXiv Detail & Related papers (2021-04-02T13:27:23Z)
- From Boltzmann Machines to Neural Networks and Back Again [31.613544605376624]
We give new results for learning Restricted Boltzmann Machines, probably the most well-studied class of latent variable models.
Our results are based on new connections to learning two-layer neural networks under $\ell_\infty$-bounded input.
We then give an algorithm for learning a natural class of supervised RBMs with better runtime than what is possible for its related class of networks without distributional assumptions.
arXiv Detail & Related papers (2020-07-25T00:42:50Z)
- OccamNet: A Fast Neural Model for Symbolic Regression at Scale [11.463756755780583]
OccamNet is a neural network model that finds interpretable, compact, and sparse symbolic fits to data.
Our model defines a probability distribution over functions with efficient sampling and function evaluation.
It can identify symbolic fits for a variety of problems, including analytic and non-analytic functions, implicit functions, and simple image classification.
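As a cartoon of "a probability distribution over functions with efficient sampling" (the primitive set, depth, and uniform logits below are hypothetical, not OccamNet's actual architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical primitive set; OccamNet's basis and wiring differ.
primitives = {"sin": np.sin, "square": lambda t: t * t, "identity": lambda t: t}
names = list(primitives)

def sample_function(logits_per_slot):
    """Sample a depth-2 composition f(x) = g(h(x)): one primitive per slot,
    drawn from a softmax over that slot's logits."""
    chosen = []
    for logits in logits_per_slot:
        p = np.exp(logits - logits.max())
        p /= p.sum()
        chosen.append(names[rng.choice(len(names), p=p)])
    g, h = primitives[chosen[0]], primitives[chosen[1]]
    return chosen, lambda t: g(h(t))

chosen, f = sample_function([np.zeros(3), np.zeros(3)])
print(chosen, f(0.5))   # e.g. ['square', 'sin'] gives sin(0.5)**2
```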
arXiv Detail & Related papers (2020-07-16T21:14:45Z)
- Generative Semantic Hashing Enhanced via Boltzmann Machines [61.688380278649056]
Existing generative-hashing methods mostly assume a factorized form for the posterior distribution.
We propose to employ the distribution of a Boltzmann machine as the variational posterior.
We show that by effectively modeling correlations among different bits within a hash code, our model can achieve significant performance gains.
arXiv Detail & Related papers (2020-06-16T01:23:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.