Open-Source Fermionic Neural Networks with Ionic Charge Initialization
- URL: http://arxiv.org/abs/2401.10287v1
- Date: Tue, 16 Jan 2024 08:51:58 GMT
- Title: Open-Source Fermionic Neural Networks with Ionic Charge Initialization
- Authors: Shai Pranesh, Shang Zhu, Venkat Viswanathan, Bharath Ramsundar
- Abstract summary: We integrate the FermiNet model into a standard and widely used open source library, DeepChem.
We also propose novel initialization techniques to overcome the difficulties of assigning excess or missing electrons in ions.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Finding accurate solutions to the electronic Schrödinger equation plays an
important role in discovering molecular and material energies and
characteristics. Consequently, solving systems with large numbers of electrons
has become increasingly important. Variational Monte Carlo (VMC) methods,
especially those approximated through deep neural networks, are promising in
this regard. In this paper, we aim to integrate one such model called the
FermiNet, a post-Hartree-Fock (HF) Deep Neural Network (DNN) model, into a
standard and widely used open source library, DeepChem. We also propose novel
initialization techniques to overcome the difficulties associated with the
assignment of excess or lack of electrons for ions.
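The ionic initialization idea lends itself to a short illustration. Below is a minimal Python sketch under stated assumptions: the function names, the electronegativity-based assignment rule, and the Gaussian walker placement are illustrative choices, not necessarily the paper's or DeepChem's exact implementation. The idea is to start from neutral-atom electron counts per nucleus, move the excess or missing electrons implied by the net charge onto the most or least electronegative atoms, and then initialize Monte Carlo walkers around the resulting per-atom centers.
```python
import numpy as np

# Pauling electronegativities and atomic numbers for a small illustrative subset.
ELECTRONEGATIVITY = {"H": 2.20, "Li": 0.98, "C": 2.55, "N": 3.04,
                     "O": 3.44, "F": 3.98, "Na": 0.93, "Cl": 3.16}
ATOMIC_NUMBER = {"H": 1, "Li": 3, "C": 6, "N": 7,
                 "O": 8, "F": 9, "Na": 11, "Cl": 17}


def assign_electrons(symbols, charge):
    """Distribute electrons over nuclei for a possibly charged system.

    Start from neutral-atom counts, then remove |charge| electrons from the
    least electronegative atoms (cations) or add them to the most
    electronegative atoms (anions). The counts sum to sum(Z) - charge.
    """
    counts = [ATOMIC_NUMBER[s] for s in symbols]
    by_electronegativity = sorted(range(len(symbols)),
                                  key=lambda i: ELECTRONEGATIVITY[symbols[i]])
    if charge > 0:    # cation: take electrons from the least electronegative atoms
        for k in range(charge):
            counts[by_electronegativity[k % len(symbols)]] -= 1
    elif charge < 0:  # anion: give electrons to the most electronegative atoms
        for k in range(-charge):
            counts[by_electronegativity[::-1][k % len(symbols)]] += 1
    return counts


def init_walkers(symbols, coords, charge, n_walkers=8, width=1.0, seed=0):
    """Place each electron in a Gaussian cloud around the nucleus it was assigned to."""
    rng = np.random.default_rng(seed)
    counts = assign_electrons(symbols, charge)
    centers = np.concatenate(
        [np.repeat(np.asarray(c)[None, :], n, axis=0) for c, n in zip(coords, counts)])
    return centers[None, :, :] + width * rng.standard_normal((n_walkers, len(centers), 3))


# Example: the hydroxide anion OH- has 10 electrons; the extra one goes to oxygen.
symbols = ["O", "H"]
coords = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.8]])  # bohr
print(assign_electrons(symbols, charge=-1))             # -> [9, 1]
print(init_walkers(symbols, coords, charge=-1).shape)   # -> (8, 10, 3)
```
A full FermiNet-style setup would additionally split each atom's electron count into spin-up and spin-down populations; that step is omitted here for brevity.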
Related papers
- Novel Kernel Models and Exact Representor Theory for Neural Networks Beyond the Over-Parameterized Regime [52.00917519626559]
This paper presents two models of neural networks and their training applicable to neural networks of arbitrary width, depth and topology.
We also present an exact novel representor theory for layer-wise neural network training with unregularized gradient descent in terms of a local-extrinsic neural kernel (LeNK).
This representor theory gives insight into the role of higher-order statistics in neural network training and the effect of kernel evolution in neural-network kernel models.
arXiv Detail & Related papers (2024-05-24T06:30:36Z) - Neural Pfaffians: Solving Many Many-Electron Schrödinger Equations [58.130170155147205]
Neural wave functions have achieved unprecedented accuracy in approximating the ground state of many-electron systems, though at a high computational cost.
Recent works proposed amortizing the cost by learning generalized wave functions across different structures and compounds instead of solving each problem independently.
This work tackles the problem by defining overparametrized, fully learnable neural wave functions suitable for generalization across molecules.
arXiv Detail & Related papers (2024-05-23T16:30:51Z) - CHGNet: Pretrained universal neural network potential for
charge-informed atomistic modeling [0.6860131654491484]
We present the Crystal Hamiltonian Graph neural Network (CHGNet) as a novel machine-learning interatomic potential (MLIP).
CHGNet is pretrained on the energies, forces, stresses, and magnetic moments from the Materials Project Trajectory dataset.
We provide new insights into ionic systems with additional electronic degrees of freedom that cannot be observed by previous MLIPs.
arXiv Detail & Related papers (2023-02-28T01:30:06Z) - Towards Neural Variational Monte Carlo That Scales Linearly with System
Size [67.09349921751341]
Quantum many-body problems are central to demystifying some exotic quantum phenomena, e.g., high-temperature superconductivity.
The combination of neural networks (NN) for representing quantum states, and the Variational Monte Carlo (VMC) algorithm, has been shown to be a promising method for solving such problems.
We propose a NN architecture called Vector-Quantized Neural Quantum States (VQ-NQS) that utilizes vector-quantization techniques to leverage redundancies in the local-energy calculations of the VMC algorithm.
arXiv Detail & Related papers (2022-12-21T19:00:04Z) - A Self-Attention Ansatz for Ab-initio Quantum Chemistry [3.4161707164978137]
We present a novel neural network architecture using self-attention, the Wavefunction Transformer (Psiformer).
We show that the Psiformer can be used as a drop-in replacement for other neural networks, often dramatically improving the accuracy of the calculations.
This demonstrates that self-attention networks can learn complex quantum mechanical correlations between electrons, and are a promising route to reaching unprecedented accuracy in chemical calculations on larger systems.
arXiv Detail & Related papers (2022-11-24T15:38:55Z) - The Spectral Bias of Polynomial Neural Networks [63.27903166253743]
Polynomial neural networks (PNNs) have been shown to be particularly effective at image generation and face recognition, where high-frequency information is critical.
Previous studies have revealed that neural networks demonstrate a spectral bias towards low-frequency functions, which yields faster learning of low-frequency components during training.
Inspired by such studies, we conduct a spectral analysis of the Neural Tangent Kernel (NTK) of PNNs.
We find that the Π-Net family, i.e., a recently proposed parametrization of PNNs, speeds up the learning of the higher frequencies.
arXiv Detail & Related papers (2022-02-27T23:12:43Z) - Wave function Ansatz (but Periodic) Networks and the Homogeneous
Electron Gas [1.7944372791281356]
We design a neural network Ansatz for variationally finding the ground-state wave function of the Homogeneous Electron Gas.
We study the spin-polarised and paramagnetic phases with 7, 14 and 19 electrons over a broad range of densities.
This contribution establishes neural network models as flexible and high precision Ansätze for periodic electronic systems.
arXiv Detail & Related papers (2022-02-02T14:12:49Z) - Autoregressive neural-network wavefunctions for ab initio quantum
chemistry [3.5987961950527287]
We parameterise the electronic wavefunction with a novel autoregressive neural network (ARN).
This allows us to perform electronic structure calculations on molecules with up to 30 spin-orbitals.
arXiv Detail & Related papers (2021-09-26T13:44:41Z) - Solving the electronic Schrödinger equation for multiple nuclear
geometries with weight-sharing deep neural networks [4.1201966507589995]
We introduce a weight-sharing constraint when optimizing neural network-based models for different molecular geometries.
We find that this technique can accelerate optimization when considering sets of nuclear geometries of the same molecule by an order of magnitude.
arXiv Detail & Related papers (2021-05-18T08:23:09Z) - The Hintons in your Neural Network: a Quantum Field Theory View of Deep
Learning [84.33745072274942]
We show how to represent linear and non-linear layers as unitary quantum gates, and interpret the fundamental excitations of the quantum model as particles.
On top of opening a new perspective and techniques for studying neural networks, the quantum formulation is well suited for optical quantum computing.
arXiv Detail & Related papers (2021-03-08T17:24:29Z) - Better, Faster Fermionic Neural Networks [68.61120920231944]
We present several improvements to the FermiNet that allow us to set new records for speed and accuracy on challenging systems.
We find that increasing the size of the network is sufficient to reach chemical accuracy on atoms as large as argon.
This enables us to run the FermiNet on the challenging transition of bicyclobutane to butadiene and compare against the PauliNet on the automerization of cyclobutadiene.
arXiv Detail & Related papers (2020-11-13T20:55:56Z)