Representation of general spin-$S$ systems using a Restricted Boltzmann
Machine with Softmax Regression
- URL: http://arxiv.org/abs/2109.10651v2
- Date: Mon, 24 Apr 2023 15:26:19 GMT
- Title: Representation of general spin-$S$ systems using a Restricted Boltzmann
Machine with Softmax Regression
- Authors: Abhiroop Lahiri, Shazia Janwari and Swapan K Pati
- Abstract summary: We have shown that the proposed SRBM technique performs
very well, obtaining the trial wave function in a numerically efficient way.
We evaluated the accuracy of our method by studying spin-1/2 systems with the
softmax RBM.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Here, we propose a novel method for the representation of general
spin systems using a Restricted Boltzmann Machine with Softmax Regression
(SRBM) that follows the probability distribution of the training data. SRBM
training is performed using the stochastic reconfiguration method to find an
approximate representation of many-body wave functions. We have shown that the
proposed SRBM technique performs very well, obtaining the trial wave function
in a numerically efficient way that is in good agreement with the theoretical
prediction. We demonstrated that the prediction of the trial wave function
through the SRBM becomes more accurate as the number of hidden units increases.
We evaluated the accuracy of our method by studying spin-1/2 quantum systems
with the softmax RBM, which shows good accordance with Exact Diagonalization
(ED). We have also compared the energies of spin chains of a few spin
multiplicities ($1$, $3/2$ and $2$) with ED and DMRG results.
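The abstract above does not spell out the ansatz, but the idea of an RBM with softmax (multinomial) visible units for a spin-$S$ chain can be sketched as follows. The parameterization below is a generic one-hot RBM wave function, an illustrative assumption rather than the authors' exact construction: each site's $2S+1$ local states are one-hot encoded, and the hidden units are traced out analytically.

```python
import numpy as np

# Hypothetical sketch (not the paper's exact parameterization): an RBM
# ansatz whose visible layer one-hot encodes the (2S+1) local states of
# each site of a spin-S chain, i.e. softmax/multinomial visible units.
rng = np.random.default_rng(0)

S = 1            # spin quantum number (2S+1 = 3 local states)
L = 4            # chain length
M = 8            # number of hidden units
d = 2 * S + 1    # local Hilbert-space dimension

# Variational parameters: visible biases a, hidden biases b, couplings W.
a = 0.01 * rng.standard_normal((L, d))
b = 0.01 * rng.standard_normal(M)
W = 0.01 * rng.standard_normal((M, L, d))

def one_hot(config):
    """Encode a configuration (array of local state indices) as one-hot."""
    v = np.zeros((L, d))
    v[np.arange(L), config] = 1.0
    return v

def log_psi(config):
    """Log-amplitude after tracing out the binary hidden units:
    log psi = sum_i a_i . v_i + sum_j log(2 cosh(b_j + sum_i W_ji . v_i))."""
    v = one_hot(config)
    theta = b + np.tensordot(W, v, axes=([1, 2], [0, 1]))
    return np.sum(a * v) + np.sum(np.log(2.0 * np.cosh(theta)))

# Unnormalized probability |psi|^2 of a sample configuration.
config = np.array([0, 2, 1, 0])
print(np.exp(2.0 * log_psi(config)))
```

In a variational Monte Carlo loop, the parameters `a`, `b`, `W` would then be updated with stochastic reconfiguration to minimize the energy of the target Hamiltonian.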
Related papers
- von Mises Quasi-Processes for Bayesian Circular Regression [57.88921637944379]
We explore a family of expressive and interpretable distributions over circle-valued random functions.
The resulting probability model has connections with continuous spin models in statistical physics.
For posterior inference, we introduce a new Stratonovich-like augmentation that lends itself to fast Markov Chain Monte Carlo sampling.
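For orientation, the basic building block of such circular models is the von Mises density $p(\theta \mid \mu, \kappa) \propto \exp(\kappa \cos(\theta - \mu))$; the toy check below (not from the paper) verifies its normalization numerically.

```python
import numpy as np

# Illustrative only: the von Mises density on the circle,
#   p(theta | mu, kappa) = exp(kappa * cos(theta - mu)) / (2*pi*I0(kappa)),
# where I0 is the modified Bessel function of the first kind, order 0.
def von_mises_pdf(theta, mu=0.0, kappa=2.0):
    return np.exp(kappa * np.cos(theta - mu)) / (2.0 * np.pi * np.i0(kappa))

# The density integrates to one over a full circle (trapezoidal rule).
grid = np.linspace(-np.pi, np.pi, 10001)
p = von_mises_pdf(grid)
integral = np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(grid))
print(integral)  # close to 1.0
```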
arXiv Detail & Related papers (2024-06-19T01:57:21Z) - Generative Fractional Diffusion Models [53.36835573822926]
We introduce the first continuous-time score-based generative model that leverages fractional diffusion processes for its underlying dynamics.
Our evaluations on real image datasets demonstrate that GFDM achieves greater pixel-wise diversity and enhanced image quality, as indicated by a lower FID.
arXiv Detail & Related papers (2023-10-26T17:53:24Z) - Provable and Practical: Efficient Exploration in Reinforcement Learning via Langevin Monte Carlo [104.9535542833054]
We present a scalable and effective exploration strategy based on Thompson sampling for reinforcement learning (RL).
We instead directly sample the Q function from its posterior distribution, by using Langevin Monte Carlo.
Our approach achieves better or similar results compared with state-of-the-art deep RL algorithms on several challenging exploration tasks from the Atari57 suite.
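The core mechanism, sampling from a posterior by simulating Langevin dynamics, can be shown on a toy problem. The snippet below is a generic unadjusted Langevin algorithm on a known 1-D Gaussian "posterior", an assumption for illustration, not the paper's Q-function machinery.

```python
import numpy as np

# Toy sketch of Langevin Monte Carlo (not the paper's algorithm):
#   theta <- theta + (eps/2) * grad log p(theta) + sqrt(eps) * xi,
# here sampling a 1-D Gaussian "posterior" N(mu, sigma^2) whose
# log-density gradient is known in closed form.
rng = np.random.default_rng(1)
mu, sigma = 3.0, 0.5

def grad_log_post(theta):
    return -(theta - mu) / sigma**2

eps = 1e-3
theta = 0.0
samples = []
for step in range(20000):
    theta += 0.5 * eps * grad_log_post(theta) + np.sqrt(eps) * rng.standard_normal()
    if step > 5000:          # discard burn-in
        samples.append(theta)

print(np.mean(samples))      # approaches mu for small step sizes
```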
arXiv Detail & Related papers (2023-05-29T17:11:28Z) - A real neural network state for quantum chemistry [1.9363665969803923]
The restricted Boltzmann machine (RBM) has been successfully applied to solve the many-electron Schrödinger equation.
We propose a single-layer fully connected neural network adapted from RBM and apply it to study ab initio quantum chemistry problems.
arXiv Detail & Related papers (2023-01-10T02:21:40Z) - Provably Efficient Offline Reinforcement Learning with Trajectory-Wise
Reward [66.81579829897392]
We propose a novel offline reinforcement learning algorithm called Pessimistic vAlue iteRaTion with rEward Decomposition (PARTED).
PARTED decomposes the trajectory return into per-step proxy rewards via least-squares-based reward redistribution, and then performs pessimistic value iteration based on the learned proxy reward.
To the best of our knowledge, PARTED is the first offline RL algorithm that is provably efficient in general MDP with trajectory-wise reward.
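The least-squares redistribution step can be illustrated with a deliberately simplified linear model (the features and shapes below are invented for illustration, not taken from the paper): fit a per-step proxy reward so that its sum over each trajectory matches the observed trajectory-wise return.

```python
import numpy as np

# Hypothetical illustration of least-squares reward redistribution: fit a
# linear per-step proxy reward r_hat(s, a) = phi(s, a) . w so that the
# proxy rewards of each trajectory sum to its observed return.
rng = np.random.default_rng(2)

n_traj, horizon, n_feat = 50, 10, 4
phi = rng.standard_normal((n_traj, horizon, n_feat))   # per-step features
w_true = np.array([1.0, -2.0, 0.5, 3.0])

# Only the per-trajectory return (sum of hidden step rewards) is observed.
returns = (phi @ w_true).sum(axis=1)       # trajectory-wise reward

# Least squares on the *summed* features recovers a consistent w.
Phi_sum = phi.sum(axis=1)                  # shape (n_traj, n_feat)
w_hat, *_ = np.linalg.lstsq(Phi_sum, returns, rcond=None)

proxy = phi @ w_hat                        # per-step proxy rewards
print(np.allclose(proxy.sum(axis=1), returns))  # True
```

A pessimistic value-iteration procedure would then treat `proxy` as the step-wise reward signal.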
arXiv Detail & Related papers (2022-06-13T19:11:22Z) - Neural Contextual Bandits via Reward-Biased Maximum Likelihood
Estimation [9.69596041242667]
Reward-biased maximum likelihood estimation (RBMLE) is a classic principle in the adaptive control literature for tackling explore-exploit trade-offs.
This paper studies the contextual bandit problem with general bounded reward functions and proposes NeuralRBMLE, which adapts the RBMLE principle by adding a bias term to the log-likelihood to enforce exploration.
We show that both algorithms achieve comparable or better empirical regrets than the state-of-the-art methods on real-world datasets with non-linear reward functions.
arXiv Detail & Related papers (2022-03-08T16:33:36Z) - Probabilistic Gradient Boosting Machines for Large-Scale Probabilistic
Regression [51.770998056563094]
Probabilistic Gradient Boosting Machines (PGBM) is a method to create probabilistic predictions with a single ensemble of decision trees.
We empirically demonstrate the advantages of PGBM compared to existing state-of-the-art methods.
arXiv Detail & Related papers (2021-06-03T08:32:13Z) - Neural-network Quantum States for Spin-1 systems: spin-basis and
parameterization effects on compactness of representations [0.0]
Neural network quantum states (NQS) have been widely applied to spin-1/2 systems where they have proven to be highly effective.
We propose a more direct generalisation of RBMs for spin-1 that retains the key properties of the standard spin-1/2 RBM.
Further to this, we investigate how the hidden unit complexity of NQS depends on the local single-spin basis used.
arXiv Detail & Related papers (2021-05-18T15:05:22Z) - Helping restricted Boltzmann machines with quantum-state representation
by restoring symmetry [0.0]
Variational wave functions based on neural networks have been recognized as a powerful ansatz to represent quantum many-body states accurately.
We construct a variational wave function with one of the simplest neural networks, the restricted Boltzmann machine (RBM), and apply it to a fundamental quantum spin Hamiltonian.
We show that, with the help of the symmetry, the RBM wave function achieves state-of-the-art accuracy both in ground-state and excited-state calculations.
arXiv Detail & Related papers (2020-09-30T16:25:28Z) - Exact representations of many body interactions with RBM neural networks [77.34726150561087]
We exploit the representation power of RBMs to provide an exact decomposition of many-body contact interactions into one-body operators.
This construction generalizes the well-known Hirsch transform used for the Hubbard model to more complicated theories such as Pionless EFT in nuclear physics.
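For context, the standard Hirsch transform replaces the quartic on-site Hubbard interaction (for $U > 0$) by a sum over an auxiliary Ising field $s$ coupled only to one-body density operators:

```latex
e^{-\Delta\tau\, U\, n_{\uparrow} n_{\downarrow}}
  = \tfrac{1}{2}\, e^{-\Delta\tau U (n_{\uparrow} + n_{\downarrow})/2}
    \sum_{s = \pm 1} e^{\lambda s (n_{\uparrow} - n_{\downarrow})},
\qquad \cosh\lambda = e^{\Delta\tau U / 2},
```

which can be checked directly on the four occupation states $(n_{\uparrow}, n_{\downarrow}) \in \{0,1\}^2$. The paper's point is that RBM hidden units play the role of such auxiliary fields for more general many-body contact interactions.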
arXiv Detail & Related papers (2020-05-07T15:59:29Z) - Gravitational-wave parameter estimation with autoregressive neural
network flows [0.0]
We introduce the use of autoregressive normalizing flows for rapid likelihood-free inference of binary black hole system parameters from gravitational-wave data with deep neural networks.
A normalizing flow is an invertible mapping on a sample space that can be used to induce a transformation from a simple probability distribution to a more complex one.
We build a more powerful latent variable model by incorporating autoregressive flows within the variational autoencoder framework.
arXiv Detail & Related papers (2020-02-18T15:44:04Z)
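The change-of-variables rule underlying the normalizing flows described above can be made concrete with a one-dimensional toy example (a generic illustration, not the paper's architecture): pushing a standard normal through the invertible map $f(z) = e^z$ yields the log-normal distribution.

```python
import numpy as np

# Minimal illustration of the change-of-variables rule behind normalizing
# flows: for an invertible map x = f(z) applied to base density p_z,
#   log p_x(x) = log p_z(f^{-1}(x)) + log |d f^{-1} / dx|.
def log_normal_base(z):
    """Log-density of the standard normal base distribution."""
    return -0.5 * (z**2 + np.log(2.0 * np.pi))

def flow_log_density(x):
    """Induced log-density under the map f(z) = exp(z)."""
    z = np.log(x)                 # inverse map f^{-1}(x) = log x
    log_det = -np.log(x)          # log |d f^{-1}/dx| = log(1/x)
    return log_normal_base(z) + log_det

# The result is the log-normal density, which we can check analytically.
x = 2.0
analytic = np.exp(-0.5 * np.log(x)**2) / (x * np.sqrt(2.0 * np.pi))
print(np.isclose(np.exp(flow_log_density(x)), analytic))  # True
```

An autoregressive flow stacks many such invertible maps, each conditioned on previous dimensions, so the log-determinant stays cheap to compute.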
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.