Explicitly antisymmetrized neural network layers for variational Monte
Carlo simulation
- URL: http://arxiv.org/abs/2112.03491v1
- Date: Tue, 7 Dec 2021 04:44:43 GMT
- Title: Explicitly antisymmetrized neural network layers for variational Monte
Carlo simulation
- Authors: Jeffmin Lin, Gil Goldshlager, Lin Lin
- Abstract summary: We introduce explicitly antisymmetrized universal neural network layers as a diagnostic tool.
We demonstrate that the resulting FermiNet-GA architecture can yield effectively the exact ground state energy for small systems.
Surprisingly, on the nitrogen molecule at a dissociating bond length of 4.0 Bohr, the full single-determinant FermiNet can significantly outperform the standard 64-determinant FermiNet.
- Score: 1.8965732681322227
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The combination of neural networks and quantum Monte Carlo methods has arisen
as a path forward for highly accurate electronic structure calculations.
Previous proposals have combined equivariant neural network layers with an
antisymmetric layer to satisfy the antisymmetry requirements of the electronic
wavefunction. However, to date it is unclear if one can represent antisymmetric
functions of physical interest, and it is difficult to measure the
expressiveness of the antisymmetric layer. This work attempts to address this
problem by introducing explicitly antisymmetrized universal neural network
layers as a diagnostic tool. We first introduce a generic antisymmetric (GA)
layer, which we use to replace the entire antisymmetric layer of the highly
accurate ansatz known as the FermiNet. We demonstrate that the resulting
FermiNet-GA architecture can yield effectively the exact ground state energy
for small systems. We then consider a factorized antisymmetric (FA) layer which
more directly generalizes the FermiNet by replacing products of determinants
with products of antisymmetrized neural networks. Interestingly, the resulting
FermiNet-FA architecture does not outperform the FermiNet. This suggests that
the sum of products of antisymmetries is a key limiting aspect of the FermiNet
architecture. To explore this further, we investigate a slight modification of
the FermiNet called the full determinant mode, which replaces each product of
determinants with a single combined determinant. The full single-determinant
FermiNet closes a large part of the gap between the standard single-determinant
FermiNet and FermiNet-GA. Surprisingly, on the nitrogen molecule at a
dissociating bond length of 4.0 Bohr, the full single-determinant FermiNet can
significantly outperform the standard 64-determinant FermiNet, yielding an
energy within 0.4 kcal/mol of the best available computational benchmark.
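The generic antisymmetrization underlying the GA layer can be sketched in a few lines: sum the network output over all particle permutations, weighted by parity signs. This brute-force version (function names are illustrative, not taken from the paper's code) makes clear both why the construction is fully general and why its O(N!) cost confines it to a diagnostic role. Applied to a product of single-particle orbitals, it reproduces the Slater determinant, which is the sense in which determinant-based layers are a special case.

```python
import itertools
import numpy as np

def parity(perm):
    """Return +1 for an even permutation, -1 for an odd one (cycle counting)."""
    seen = [False] * len(perm)
    sign = 1
    for start in range(len(perm)):
        # A cycle of length L contributes (-1)**(L - 1) to the sign.
        length, j = 0, start
        while not seen[j]:
            seen[j] = True
            j = perm[j]
            length += 1
        if length > 0 and length % 2 == 0:
            sign = -sign
    return sign

def antisymmetrize(f, xs):
    """Brute-force antisymmetrizer: sum sgn(sigma) * f(permuted xs) over all
    N! permutations, so the result flips sign under any particle exchange."""
    return sum(
        parity(perm) * f([xs[i] for i in perm])
        for perm in itertools.permutations(range(len(xs)))
    )

# Antisymmetrizing the product of orbitals phi_j(x) = x**j recovers the
# Slater (here Vandermonde) determinant det[phi_j(x_k)].
xs = [0.3, 1.1, -0.7]
product_of_orbitals = lambda ys: ys[1] * ys[2] ** 2   # 1 * x * x^2
print(antisymmetrize(product_of_orbitals, xs))
print(np.linalg.det(np.vander(np.array(xs), increasing=True)))  # agrees to numerical precision
```

The two printed values agree because the Leibniz expansion of the determinant is exactly this signed sum over permutations; a learned network in place of the orbital product gives the GA layer its extra flexibility, at factorial cost.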
Related papers
- Sorting Out Quantum Monte Carlo [15.0505667077874]
Molecular modeling at the quantum level requires choosing a parameterization of the wavefunction that respects the required particle symmetries.
We introduce a new antisymmetrization layer derived from sorting, the sortlet, which scales as $O(N \log N)$ in the number of particles.
We show numerically that applying this antisymmetrization layer on top of an attention-based neural network backbone yields a flexible wavefunction parameterization.
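A rough sketch of the sorting-based idea (simplified; in the paper the per-particle scalars come from a permutation-equivariant backbone, which is replaced here by a plain vector): the product of consecutive differences of the sorted values is permutation-invariant, so multiplying it by the sign of the sorting permutation yields an antisymmetric output at $O(N \log N)$ cost.

```python
import numpy as np

def perm_sign(perm):
    """Sign of a permutation via cycle counting (+1 even, -1 odd)."""
    seen = [False] * len(perm)
    sign = 1
    for start in range(len(perm)):
        length, j = 0, start
        while not seen[j]:
            seen[j] = True
            j = perm[j]
            length += 1
        # A cycle of length L contributes (-1)**(L - 1).
        if length > 0 and length % 2 == 0:
            sign = -sign
    return sign

def sortlet(values):
    """Sorting-based antisymmetrizer (illustrative sketch).

    `values` holds one scalar per particle. Sorting makes the product of
    consecutive differences symmetric under particle exchange; the sign of
    the sorting permutation restores antisymmetry.
    """
    values = np.asarray(values, dtype=float)
    order = np.argsort(values, kind="stable")
    return perm_sign(list(order)) * float(np.prod(np.diff(values[order])))
```

Swapping any two inputs leaves the sorted sequence (and hence the product) unchanged while flipping the permutation sign, so the output changes sign, as an antisymmetric wavefunction must.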
arXiv Detail & Related papers (2023-11-09T18:56:43Z)
- Neural Wave Functions for Superfluids [3.440236962613469]
We study the unitary Fermi gas, a system with strong, short-range, two-body interactions known to possess a superfluid ground state.
We use the recently developed Fermionic neural network (FermiNet) wave function Ansatz for variational Monte Carlo calculations.
arXiv Detail & Related papers (2023-05-11T17:23:29Z)
- Multi-Task Mixture Density Graph Neural Networks for Predicting Cu-based Single-Atom Alloy Catalysts for CO2 Reduction Reaction [61.9212585617803]
Graph neural networks (GNNs) have drawn more and more attention from material scientists.
We develop a multi-task (MT) architecture based on DimeNet++ and mixture density networks to improve performance on this task.
arXiv Detail & Related papers (2022-09-15T13:52:15Z)
- SymNMF-Net for The Symmetric NMF Problem [62.44067422984995]
We propose a neural network called SymNMF-Net for the Symmetric NMF problem.
We show that the inference of each block corresponds to a single iteration of the optimization.
Empirical results on real-world datasets demonstrate the superiority of our SymNMF-Net.
arXiv Detail & Related papers (2022-05-26T08:17:39Z)
- $O(N^2)$ Universal Antisymmetry in Fermionic Neural Networks [107.86545461433616]
We propose permutation-equivariant architectures to which a Slater determinant is applied to induce antisymmetry.
FermiNet is proved to have universal approximation capability with a single determinant; that is, a single determinant suffices to represent any antisymmetric function.
We substitute the Slater determinant with a pairwise antisymmetry construction, which is easy to implement and can reduce the computational cost to $O(N^2)$.
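The simplest instance of a pairwise antisymmetric construction is a Vandermonde-style product of differences; the sketch below is illustrative only and not the paper's exact architecture, but it shows how antisymmetry can be obtained from $O(N^2)$ pairwise terms rather than an $O(N^3)$ determinant.

```python
import numpy as np

def pairwise_antisym(h):
    """Vandermonde-style pairwise antisymmetrizer (illustrative instance).

    Given one learned scalar h_i per particle, prod_{i<j} (h_i - h_j)
    flips sign under any particle exchange and costs O(N^2) to evaluate.
    """
    h = np.asarray(h, dtype=float)
    out = 1.0
    for i in range(len(h)):
        for j in range(i + 1, len(h)):
            out *= h[i] - h[j]
    return out
```

Exchanging particles i and j negates exactly the (i, j) factor and permutes the rest among themselves in sign-compensating pairs, so the overall product changes sign.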
arXiv Detail & Related papers (2022-05-26T07:44:54Z)
- Improving the performance of fermionic neural networks with the Slater exponential Ansatz [0.351124620232225]
We propose a technique for the use of fermionic neural networks (FermiNets) with the Slater exponential Ansatz for electron-nuclear and electron-electron distances.
arXiv Detail & Related papers (2022-02-21T11:15:42Z)
- Discovering Quantum Phase Transitions with Fermionic Neural Networks [0.0]
Deep neural networks have been extremely successful as highly accurate wave function ansätze for variational Monte Carlo calculations.
We present an extension of one such ansatz, FermiNet, to calculations of the ground states of periodic Hamiltonians.
arXiv Detail & Related papers (2022-02-10T17:32:17Z)
- Entropy Minimizing Matrix Factorization [102.26446204624885]
Nonnegative Matrix Factorization (NMF) is a widely-used data analysis technique, and has yielded impressive results in many real-world tasks.
In this study, an Entropy Minimizing Matrix Factorization framework (EMMF) is developed to tackle the above problem.
Considering that the outliers are usually much less than the normal samples, a new entropy loss function is established for matrix factorization.
arXiv Detail & Related papers (2021-03-24T21:08:43Z)
- Better, Faster Fermionic Neural Networks [68.61120920231944]
We present several improvements to the FermiNet that allow us to set new records for speed and accuracy on challenging systems.
We find that increasing the size of the network is sufficient to reach chemical accuracy on atoms as large as argon.
This enables us to run the FermiNet on the challenging transition of bicyclobutane to butadiene and compare against the PauliNet on the automerization of cyclobutadiene.
arXiv Detail & Related papers (2020-11-13T20:55:56Z)
- Targeted free energy estimation via learned mappings [66.20146549150475]
Free energy perturbation (FEP) was proposed by Zwanzig more than six decades ago as a method to estimate free energy differences.
FEP suffers from a severe limitation: the requirement of sufficient overlap between distributions.
One strategy to mitigate this problem, called Targeted Free Energy Perturbation, uses a high-dimensional mapping in configuration space to increase overlap.
arXiv Detail & Related papers (2020-02-12T11:10:00Z)
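Zwanzig's FEP identity, $\Delta F = -k_B T \ln \langle \exp(-\Delta U / k_B T) \rangle_A$, is compact enough to state in code. The toy system and all names below are illustrative, not from the paper: two 1-D harmonic wells, where the exact answer is known in closed form. The estimator converges well only when samples from state A also cover the important region of state B, which is exactly the overlap limitation the summary mentions.

```python
import numpy as np

def fep_estimate(samples_a, u_a, u_b, kT=1.0):
    """Zwanzig free energy perturbation: dF = -kT * ln < exp(-dU/kT) >_A.

    `samples_a` are equilibrium configurations of state A; the estimate is
    exact in the infinite-sample limit but converges poorly when the A and B
    distributions overlap weakly.
    """
    du = u_b(samples_a) - u_a(samples_a)
    return -kT * np.log(np.mean(np.exp(-du / kT)))

# Toy check: harmonic wells U = k * x**2 / 2, for which analytically
# dF = (kT / 2) * ln(k_b / k_a).
rng = np.random.default_rng(0)
kT, k_a, k_b = 1.0, 1.0, 1.5
x = rng.normal(0.0, np.sqrt(kT / k_a), size=200_000)  # Boltzmann samples of A
df = fep_estimate(x, lambda y: 0.5 * k_a * y**2, lambda y: 0.5 * k_b * y**2, kT)
```

With the well-overlapping wells chosen here the estimate lands close to the analytic value; widening the gap between k_a and k_b quickly degrades it, which is the failure mode that learned configuration-space mappings are designed to mitigate.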
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.