Determinant-free fermionic wave function using feed-forward neural
networks
- URL: http://arxiv.org/abs/2108.08631v2
- Date: Sun, 22 Aug 2021 04:44:25 GMT
- Title: Determinant-free fermionic wave function using feed-forward neural
networks
- Authors: Koji Inui, Yasuyuki Kato and Yukitoshi Motome
- Abstract summary: We propose a framework for finding the ground state of many-body fermionic systems by using feed-forward neural networks.
We show that the accuracy of the approximation can be improved by optimizing the "variance" of the energy simultaneously with the energy itself.
These improvements can be applied to other approaches based on variational Monte Carlo methods.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a general framework for finding the ground state of many-body
fermionic systems by using feed-forward neural networks. The anticommutation
relation for fermions is usually implemented in a variational wave function via
the Slater determinant (or Pfaffian), which is a computational bottleneck
because of the numerical cost of $O(N^3)$ for $N$ particles. We bypass this
bottleneck by explicitly calculating the sign changes associated with particle
exchanges in real space and using fully connected neural networks for
optimizing the remaining part of the wave function. This reduces the computational
cost to $O(N^2)$ or less. We show that the accuracy of the approximation can be
improved by optimizing the "variance" of the energy simultaneously with the
energy itself. We also find that a reweighting method in Monte Carlo sampling
can stabilize the calculation. These improvements can be applied to other
approaches based on variational Monte Carlo methods. Moreover, we show that the
accuracy can be further improved by using the symmetry of the system, the
representative states, and an additional neural network implementing a
generalized Gutzwiller-Jastrow factor. We demonstrate the efficiency of the
method by applying it to a two-dimensional Hubbard model.
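To make the abstract's central idea concrete, here is a minimal, hypothetical sketch (not the authors' implementation): the antisymmetry comes from the parity of the permutation that sorts the particle coordinates, a toy permutation-invariant feed-forward network models the remaining part of the wave function, and a composite objective penalizes the energy variance alongside the energy. All names (`symmetric_net`, `energy_loss`, the weight `lam`) are illustrative.

```python
import numpy as np

def permutation_sign(order):
    """Parity of the permutation that sorts the particles,
    computed by counting inversions (O(N^2), no determinant)."""
    inversions = sum(
        1
        for i in range(len(order))
        for j in range(i + 1, len(order))
        if order[i] > order[j]
    )
    return -1.0 if inversions % 2 else 1.0

def symmetric_net(x, w1, w2):
    """Toy permutation-invariant net: a shared per-particle layer
    followed by a sum over particles, so it is symmetric under exchange."""
    h = np.tanh(x[:, None] * w1)   # shape (N, hidden)
    return float(np.sum(h @ w2))   # scalar, exchange-symmetric

def psi(positions, w1, w2):
    """Antisymmetric trial wave function: sign of the sorting
    permutation times a symmetric network on the sorted coordinates."""
    order = np.argsort(positions)
    return permutation_sign(order) * symmetric_net(positions[order], w1, w2)

def energy_loss(local_energies, lam=0.5):
    """Composite objective in the spirit of the abstract: mean energy
    plus a penalty on its variance (lam is an illustrative weight)."""
    e = np.asarray(local_energies)
    return float(e.mean() + lam * e.var())

rng = np.random.default_rng(0)
w1, w2 = rng.normal(size=8), rng.normal(size=(8, 1))
x = np.array([0.3, -1.2, 0.7])
# Exchanging two particles flips the sign of psi:
assert np.isclose(psi(x, w1, w2), -psi(x[[1, 0, 2]], w1, w2))
```

The sign computation costs $O(N^2)$ inversion counts here (sorting-based schemes can do better), which is the kind of determinant-free scaling the paper targets.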
Related papers
- Neural Pfaffians: Solving Many Many-Electron Schrödinger Equations [58.130170155147205]
Neural wave functions have achieved unprecedented accuracy in approximating the ground state of many-electron systems, though at a high computational cost.
Recent works proposed amortizing the cost by learning generalized wave functions across different structures and compounds instead of solving each problem independently.
This work tackles the problem by defining overparametrized, fully learnable neural wave functions suitable for generalization across molecules.
arXiv Detail & Related papers (2024-05-23T16:30:51Z) - A Mean-Field Analysis of Neural Stochastic Gradient Descent-Ascent for Functional Minimax Optimization [90.87444114491116]
This paper studies minimax optimization problems defined over infinite-dimensional function classes of overparametrized two-layer neural networks.
We address (i) the convergence of the gradient descent-ascent algorithm and (ii) the representation learning of the neural networks.
Results show that the feature representation induced by the neural networks is allowed to deviate from the initial one by the magnitude of $O(\alpha^{-1})$, measured in terms of the Wasserstein distance.
arXiv Detail & Related papers (2024-04-18T16:46:08Z) - Closed-form Filtering for Non-linear Systems [83.91296397912218]
We propose a new class of filters based on Gaussian PSD Models, which offer several advantages in terms of density approximation and computational efficiency.
We show that filtering can be efficiently performed in closed form when transitions and observations are Gaussian PSD Models.
Our proposed estimator enjoys strong theoretical guarantees, with estimation error that depends on the quality of the approximation and is adaptive to the regularity of the transition probabilities.
arXiv Detail & Related papers (2024-02-15T08:51:49Z) - Neural Wave Functions for Superfluids [3.440236962613469]
We study the unitary Fermi gas, a system with strong, short-range, two-body interactions known to possess a superfluid ground state.
We use the recently developed Fermionic neural network (FermiNet) wave function Ansatz for variational Monte Carlo calculations.
arXiv Detail & Related papers (2023-05-11T17:23:29Z) - $O(N^2)$ Universal Antisymmetry in Fermionic Neural Networks [107.86545461433616]
We propose permutation-equivariant architectures, on which a Slater determinant is applied to induce antisymmetry.
FermiNet is proved to have universal approximation capability with a single determinant; that is, a single determinant suffices to represent any antisymmetric function.
We replace the Slater determinant with a pairwise antisymmetry construction, which is easy to implement and can reduce the computational cost to $O(N^2)$.
arXiv Detail & Related papers (2022-05-26T07:44:54Z) - Simulations of state-of-the-art fermionic neural network wave functions
with diffusion Monte Carlo [2.039924457892648]
We introduce several modifications to the network (FermiNet) and the optimization method (Kronecker-Factored Approximate Curvature).
The Diffusion Monte Carlo results exceed or match state-of-the-art performance for all systems investigated.
arXiv Detail & Related papers (2021-03-23T14:12:39Z) - Deep FPF: Gain function approximation in high-dimensional setting [8.164433158925592]
We present a novel approach to approximate the gain function of the feedback particle filter (FPF).
The numerical problem is to approximate the exact gain function using only finitely many particles sampled from the probability distribution.
Inspired by the recent success of the deep learning methods, we represent the gain function as a gradient of the output of a neural network.
arXiv Detail & Related papers (2020-10-02T20:17:21Z) - Variational Monte Carlo calculations of $\mathbf{A\leq 4}$ nuclei with
an artificial neural-network correlator ansatz [62.997667081978825]
We introduce a neural-network quantum state ansatz to model the ground-state wave function of light nuclei.
We compute the binding energies and point-nucleon densities of $A \leq 4$ nuclei as emerging from a leading-order pionless effective field theory Hamiltonian.
arXiv Detail & Related papers (2020-07-28T14:52:28Z) - Neural Control Variates [71.42768823631918]
We show that a set of neural networks can face the challenge of finding a good approximation of the integrand.
We derive a theoretically optimal, variance-minimizing loss function, and propose an alternative, composite loss for stable online training in practice.
Specifically, we show that the learned light-field approximation is of sufficient quality for high-order bounces, allowing us to omit the error correction and thereby dramatically reduce the noise at the cost of negligible visible bias.
arXiv Detail & Related papers (2020-06-02T11:17:55Z) - Gravitational-wave parameter estimation with autoregressive neural
network flows [0.0]
We introduce the use of autoregressive normalizing flows for rapid likelihood-free inference of binary black hole system parameters from gravitational-wave data with deep neural networks.
A normalizing flow is an invertible mapping on a sample space that can be used to induce a transformation from a simple probability distribution to a more complex one.
We build a more powerful latent variable model by incorporating autoregressive flows within the variational autoencoder framework.
arXiv Detail & Related papers (2020-02-18T15:44:04Z)
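The normalizing-flow definition in the last summary (an invertible map that turns a simple density into a more complex one) can be sketched with a one-dimensional affine map; the names `a`, `b`, and `flow_log_prob` are illustrative, not from the paper.

```python
import numpy as np

# Invertible affine map z -> x = a*z + b (a != 0): a minimal one-dimensional
# "flow" illustrating the change-of-variables rule for densities.
a, b = 2.0, 1.0

def base_log_prob(z):
    """Log-density of the standard normal base distribution."""
    return -0.5 * z**2 - 0.5 * np.log(2 * np.pi)

def flow_log_prob(x):
    """Density of x = a*z + b via change of variables:
    log p(x) = log p_base(z(x)) - log|dx/dz|."""
    z = (x - b) / a
    return base_log_prob(z) - np.log(abs(a))

# The flow density matches the analytic density of N(b, a^2):
x = 3.0
analytic = -0.5 * ((x - b) / a) ** 2 - 0.5 * np.log(2 * np.pi * a**2)
assert np.isclose(flow_log_prob(x), analytic)
```

Autoregressive flows stack many such invertible maps, each conditioned on previous dimensions, so the Jacobian term stays cheap to evaluate.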
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.