From {\tt Ferminet} to PINN. Connections between neural network-based algorithms for high-dimensional Schrödinger Hamiltonian
- URL: http://arxiv.org/abs/2410.09177v2
- Date: Wed, 20 Nov 2024 16:54:12 GMT
- Title: From {\tt Ferminet} to PINN. Connections between neural network-based algorithms for high-dimensional Schrödinger Hamiltonian
- Authors: Mashhood Khan, Emmanuel Lorin
- Abstract summary: In particular, we re-formulate a PINN algorithm as a {\it fitting} problem with data corresponding to the solution of a standard Diffusion Monte Carlo algorithm.
Connections at the level of the optimization algorithms are also established.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this note, we establish some connections between standard (data-driven) neural network-based solvers for PDEs and eigenvalue problems developed on one side in the applied mathematics and engineering communities (e.g. Deep-Ritz and Physics Informed Neural Networks (PINN)), and on the other side in quantum chemistry (e.g. Variational Monte Carlo algorithms, {\tt Ferminet} or {\tt Paulinet}, following the pioneering work of {\it Carleo et al.}). In particular, we re-formulate a PINN algorithm as a {\it fitting} problem with data corresponding to the solution of a standard Diffusion Monte Carlo algorithm initialized with a neural network-based Variational Monte Carlo algorithm. Connections at the level of the optimization algorithms are also established.
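To make the fitting viewpoint concrete, here is a minimal sketch in which a network is trained on samples standing in for a Diffusion Monte Carlo estimate of a ground state, with a PINN residual term added on top. The 1D harmonic-oscillator setting, the network size, and the loss weights are illustrative assumptions, not the paper's actual construction.

```python
# Sketch, not the paper's code: a PINN recast as a fitting problem.
# Toy setting: 1D harmonic oscillator H = -1/2 d^2/dx^2 + x^2/2, whose
# ground state is psi(x) ~ exp(-x^2/2) with energy E0 = 1/2.
import torch

torch.manual_seed(0)
psi_net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

# Stand-in for the Diffusion Monte Carlo output: noisy samples of psi.
dmc_x = torch.linspace(-4.0, 4.0, 200).unsqueeze(1)
dmc_psi = torch.exp(-dmc_x ** 2 / 2) + 0.01 * torch.randn_like(dmc_x)

E0 = 0.5
opt = torch.optim.Adam(psi_net.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    # (1) fitting term: match the Monte Carlo "data"
    fit_loss = ((psi_net(dmc_x) - dmc_psi) ** 2).mean()
    # (2) PINN term: residual of (H - E0) psi = 0 at collocation points
    x = dmc_x.clone().requires_grad_(True)
    psi = psi_net(x)
    dpsi = torch.autograd.grad(psi.sum(), x, create_graph=True)[0]
    d2psi = torch.autograd.grad(dpsi.sum(), x, create_graph=True)[0]
    residual = -0.5 * d2psi + 0.5 * x ** 2 * psi - E0 * psi
    loss = fit_loss + 0.1 * (residual ** 2).mean()
    loss.backward()
    opt.step()
```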
Related papers
- Parallel-in-Time Solutions with Random Projection Neural Networks [0.07282584715927627]
This paper considers one of the fundamental parallel-in-time methods for the solution of ordinary differential equations, Parareal, and extends it by adopting a neural network as a coarse propagator.
We provide a theoretical analysis of the convergence properties of the proposed algorithm and show its effectiveness for several examples, including Lorenz and Burgers' equations.
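A minimal sketch of the Parareal pattern described above, with a random-projection network (fixed random hidden layer, output weights fit by least squares) standing in for the coarse propagator; the test ODE, network width, and training interval are assumptions.

```python
# Sketch (not the paper's implementation): Parareal with a trained
# random-projection network as coarse propagator G; the fine
# propagator F is many explicit Euler steps on the test ODE du/dt = -u.
import numpy as np

rng = np.random.default_rng(0)

def f(u):
    return -u

def fine(u, dt, substeps=100):
    h = dt / substeps
    for _ in range(substeps):
        u = u + h * f(u)
    return u

# Random hidden layer is fixed; only the output weights are fit by
# least squares, which is the random-projection training scheme.
dt = 0.5
W, b = rng.normal(size=16), rng.normal(size=16)
u_train = np.linspace(-2.0, 2.0, 64)
H = np.tanh(np.outer(u_train, W) + b)
beta, *_ = np.linalg.lstsq(H, fine(u_train, dt), rcond=None)

def coarse(u):                      # learned G for the fixed step dt
    return float(np.tanh(u * W + b) @ beta)

N, U = 8, np.zeros(9)
U[0] = 1.0
for n in range(N):                  # initial coarse sweep
    U[n + 1] = coarse(U[n])
for k in range(3):                  # Parareal corrections
    F = [fine(U[n], dt) for n in range(N)]
    G = [coarse(U[n]) for n in range(N)]
    for n in range(N):
        U[n + 1] = coarse(U[n]) + F[n] - G[n]
# After k corrections the first k slices match the fine solution exactly.
```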
arXiv Detail & Related papers (2024-08-19T07:32:41Z)
- LinSATNet: The Positive Linear Satisfiability Neural Networks [116.65291739666303]
This paper studies how to introduce popular positive linear satisfiability constraints into neural networks.
We propose the first differentiable satisfiability layer based on an extension of the classic Sinkhorn algorithm for jointly encoding multiple sets of marginal distributions.
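For reference, a minimal sketch of the classic Sinkhorn normalization that the proposed layer extends: starting from positive scores, alternating row and column rescaling enforces prescribed marginals while remaining differentiable. The temperature and iteration count are illustrative choices; the paper's multi-set extension is not reproduced here.

```python
# Sketch of classic Sinkhorn normalization: project positive scores
# onto matrices with prescribed row/column sums by alternating rescaling.
import torch

def sinkhorn(scores, row_marg, col_marg, iters=50, tau=0.1):
    P = torch.exp(scores / tau)
    for _ in range(iters):
        P = P * (row_marg / P.sum(dim=1)).unsqueeze(1)   # fix row sums
        P = P * (col_marg / P.sum(dim=0)).unsqueeze(0)   # fix column sums
    return P

scores = torch.randn(4, 4, requires_grad=True)
P = sinkhorn(scores, torch.ones(4), torch.ones(4))   # doubly stochastic
P[0, 0].backward()        # the layer is differentiable end to end
```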
arXiv Detail & Related papers (2024-07-18T22:05:21Z)
- A Graph Neural Network-Based QUBO-Formulated Hamiltonian-Inspired Loss Function for Combinatorial Optimization using Reinforcement Learning [1.325953054381901]
We introduce a novel Monte Carlo Tree Search-based strategy with a Graph Neural Network (GNN).
We identify a behavioral pattern related to PI-GNN and devise strategies to improve its performance.
We also focus on creating a bridge between the RL-based solutions and the QUBO-formulated Hamiltonian.
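A minimal sketch of the QUBO-formulated Hamiltonian loss in this line of work: relaxed binary variables p = sigmoid(logits) are optimized against the energy p^T Q p. The tiny MaxCut instance is assumed, and free logits stand in for a GNN's node outputs.

```python
# Sketch of a QUBO Hamiltonian loss: MaxCut on the path graph 0-1-2,
# encoded with Q_ii = -deg(i) and Q_ij = 1 per edge, so minimizing
# x^T Q x maximizes the cut.
import torch

torch.manual_seed(0)
Q = torch.tensor([[-1.0,  1.0,  0.0],
                  [ 1.0, -2.0,  1.0],
                  [ 0.0,  1.0, -1.0]])

logits = (0.1 * torch.randn(3)).requires_grad_()   # stand-in for GNN outputs
opt = torch.optim.Adam([logits], lr=0.1)
for _ in range(300):
    opt.zero_grad()
    p = torch.sigmoid(logits)      # relaxed binary variables in (0, 1)
    loss = p @ Q @ p               # differentiable expected energy
    loss.backward()
    opt.step()

# Rounding typically recovers a maximum cut, e.g. [0, 1, 0] or [1, 0, 1].
x = (torch.sigmoid(logits) > 0.5).int()
```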
arXiv Detail & Related papers (2023-11-27T19:33:14Z)
- Spectral-Bias and Kernel-Task Alignment in Physically Informed Neural Networks [4.604003661048267]
Physically informed neural networks (PINNs) are a promising emerging method for solving differential equations.
We propose a comprehensive theoretical framework that sheds light on this important problem.
We derive an integro-differential equation that governs PINN prediction in the large data-set limit.
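As background for the kernel-task-alignment picture, a generic sketch of the empirical neural tangent kernel whose spectrum governs which target components are fit fastest; this is standard NTK machinery, not the paper's integro-differential equation.

```python
# Generic empirical NTK sketch: the Gram matrix of per-sample parameter
# gradients; gradient descent fits target components along its top
# eigenvectors fastest (spectral bias).
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 1))
xs = torch.linspace(-1, 1, 20).unsqueeze(1)

def grad_vec(x):
    out = net(x.unsqueeze(0)).squeeze()
    g = torch.autograd.grad(out, list(net.parameters()))
    return torch.cat([t.flatten() for t in g])

G = torch.stack([grad_vec(x) for x in xs])   # (20, n_params)
K = G @ G.T                                  # empirical NTK Gram matrix
evals, evecs = torch.linalg.eigh(K)          # spectrum -> learning speeds
```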
arXiv Detail & Related papers (2023-07-12T18:00:02Z)
- Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order.
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks.
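A minimal sketch of the symmetry in question: permuting the hidden units of an MLP (rows of the first weight matrix and bias, columns of the second) leaves the computed function unchanged, so a functional acting on weights should be equivariant to such permutations.

```python
# Hidden-neuron permutation symmetry: permuted weights, same function.
import torch

torch.manual_seed(0)
W1, b1 = torch.randn(8, 3), torch.randn(8)
W2, b2 = torch.randn(1, 8), torch.randn(1)

def mlp(x, W1, b1, W2, b2):
    return torch.tanh(x @ W1.T + b1) @ W2.T + b2

perm = torch.randperm(8)
x = torch.randn(5, 3)
y1 = mlp(x, W1, b1, W2, b2)
y2 = mlp(x, W1[perm], b1[perm], W2[:, perm], b2)
assert torch.allclose(y1, y2, atol=1e-6)   # identical outputs
```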
arXiv Detail & Related papers (2023-02-27T18:52:38Z)
- A Recursively Recurrent Neural Network (R2N2) Architecture for Learning Iterative Algorithms [64.3064050603721]
We generalize the Runge-Kutta neural network to a recursively recurrent neural network (R2N2) superstructure for the design of customized iterative algorithms.
We demonstrate that regular training of the weight parameters inside the proposed superstructure on input/output data of various computational problem classes yields similar iterations to Krylov solvers for linear equation systems, Newton-Krylov solvers for nonlinear equation systems, and Runge-Kutta solvers for ordinary differential equations.
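For intuition, a sketch of the stage recurrence that the superstructure generalizes: an explicit Runge-Kutta step is a short recurrence over function evaluations (classical RK4 below), and in R2N2 the analogous coefficients become trainable. The toy ODE is an assumption.

```python
# Classical RK4 written as a recurrence over stages, the pattern the
# R2N2 superstructure generalizes with learned coefficients.
import numpy as np

A = np.array([[0.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.0, 0.0],
              [0.0, 0.5, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
b = np.array([1/6, 1/3, 1/3, 1/6])

def rk_step(f, y, h):
    k = []
    for i in range(4):                          # recurrence over stages
        yi = y + h * sum(A[i, j] * k[j] for j in range(i))
        k.append(f(yi))
    return y + h * sum(bi * ki for bi, ki in zip(b, k))

y = rk_step(lambda y: -y, np.array([1.0]), 0.1)   # one step of dy/dt = -y
```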
arXiv Detail & Related papers (2022-11-22T16:30:33Z)
- $\Delta$-PINNs: physics-informed neural networks on complex geometries [2.1485350418225244]
Physics-informed neural networks (PINNs) have demonstrated promise in solving forward and inverse problems involving partial differential equations.
To date, there is no clear way to inform PINNs about the topology of the domain where the problem is being solved.
We propose a novel positional encoding mechanism for PINNs based on the eigenfunctions of the Laplace-Beltrami operator.
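A minimal sketch of the encoding idea, with a graph Laplacian on a 1D chain standing in for the Laplace-Beltrami operator of a general manifold; the mesh size and number of eigenfunctions are illustrative.

```python
# Positional encoding from Laplacian eigenfunctions: on a chain of mesh
# nodes, the graph Laplacian approximates the Laplace-Beltrami operator,
# and its first eigenvectors replace raw coordinates as PINN inputs.
import numpy as np

n = 50                                                 # mesh nodes
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # graph Laplacian
evals, evecs = np.linalg.eigh(L)

k = 8
encoding = evecs[:, :k]    # each node becomes (phi_1(x), ..., phi_k(x))
# On a curved or multiply connected domain this encoding respects the
# topology in a way raw (x, y, z) coordinates do not.
```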
arXiv Detail & Related papers (2022-09-08T18:03:19Z)
- Robust Training and Verification of Implicit Neural Networks: A Non-Euclidean Contractive Approach [64.23331120621118]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that the embedded network can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
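A rough sketch of the embedded-network idea under a strong simplifying assumption (small weights, so the fixed-point iteration contracts): splitting the weight matrices into positive and negative parts lets lower and upper bounds be propagated jointly, yielding an $\ell_\infty$ box around the reachable set. All sizes and constants are illustrative.

```python
# Embedded system for the implicit network z = tanh(W z + U x + b):
# propagate elementwise bounds [z_lo, z_hi] over an input box.
import numpy as np

rng = np.random.default_rng(0)
W = 0.2 * rng.normal(size=(4, 4))     # small weights -> contraction
U, b = rng.normal(size=(4, 2)), rng.normal(size=4)

def split(M):                          # positive/negative parts
    return np.maximum(M, 0), np.minimum(M, 0)

Wp, Wn = split(W)
Up, Un = split(U)

x_lo, x_hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])   # input box
z_lo, z_hi = -np.ones(4), np.ones(4)    # valid since tanh maps into (-1, 1)

for _ in range(100):                    # embedded (monotone) iteration
    new_lo = np.tanh(Wp @ z_lo + Wn @ z_hi + Up @ x_lo + Un @ x_hi + b)
    new_hi = np.tanh(Wp @ z_hi + Wn @ z_lo + Up @ x_hi + Un @ x_lo + b)
    z_lo, z_hi = new_lo, new_hi
# [z_lo, z_hi] over-approximates the implicit network's reachable set.
```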
arXiv Detail & Related papers (2022-08-08T03:13:24Z)
- PhyGNNet: Solving spatiotemporal PDEs with Physics-informed Graph Neural Network [12.385926494640932]
We propose PhyGNNet for solving partial differential equations on the basis of a graph neural network.
In particular, we divide the computing area into regular grids, define partial differential operators on the grids, then construct the PDE loss that the network optimizes to build the PhyGNNet model.
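A minimal sketch of the grid-based loss construction, in 1D and with a plain MLP standing in for the graph network: the differential operator is a finite-difference stencil over neighboring grid nodes, and its residual is minimized together with the boundary conditions.

```python
# Grid-based PDE loss: solve u'' = f on [0, 1] with u(0) = u(1) = 0,
# where f is chosen so that u(x) = sin(pi x) is the exact solution.
import torch

torch.manual_seed(0)
n, h = 64, 1.0 / 63
x = torch.linspace(0, 1, n).unsqueeze(1)
f = -(torch.pi ** 2) * torch.sin(torch.pi * x)

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(2000):
    opt.zero_grad()
    u = net(x)
    # discrete operator on the grid: (u[i-1] - 2 u[i] + u[i+1]) / h^2
    lap = (u[:-2] - 2 * u[1:-1] + u[2:]) / h ** 2
    pde_loss = ((lap - f[1:-1]) ** 2).mean()
    bc_loss = u[0, 0] ** 2 + u[-1, 0] ** 2    # boundary conditions
    (pde_loss + bc_loss).backward()
    opt.step()
```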
arXiv Detail & Related papers (2022-08-07T13:33:34Z)
- LocalDrop: A Hybrid Regularization for Deep Neural Networks [98.30782118441158]
We propose a new approach for the regularization of neural networks based on the local Rademacher complexity, called LocalDrop.
A new regularization function for both fully-connected networks (FCNs) and convolutional neural networks (CNNs) has been developed based on the proposed upper bound of the local Rademacher complexity.
arXiv Detail & Related papers (2021-03-01T03:10:11Z)
- Variational Monte Carlo calculations of $\mathbf{A\leq 4}$ nuclei with an artificial neural-network correlator ansatz [62.997667081978825]
We introduce a neural-network quantum state ansatz to model the ground-state wave function of light nuclei.
We compute the binding energies and point-nucleon densities of $A \leq 4$ nuclei as emerging from a leading-order pionless effective field theory Hamiltonian.
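As background, a minimal sketch of the Variational Monte Carlo loop such ansätze plug into: Metropolis sampling from $|\psi|^2$ and averaging of the local energy. A one-parameter Gaussian ansatz for the 1D harmonic oscillator replaces the paper's neural-network correlator.

```python
# Variational Monte Carlo for the 1D harmonic oscillator with the
# trial wave function psi(x) = exp(-alpha x^2); the exact ground-state
# energy 0.5 is recovered at alpha = 0.5.
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.4                                    # variational parameter

def log_psi(x):
    return -alpha * x ** 2

def local_energy(x):                           # E_L = (H psi) / psi
    return alpha + x ** 2 * (0.5 - 2 * alpha ** 2)

x, samples = 0.0, []
for step in range(20000):                      # Metropolis sampling of |psi|^2
    x_new = x + rng.normal(scale=0.5)
    if rng.random() < np.exp(2 * (log_psi(x_new) - log_psi(x))):
        x = x_new
    if step > 1000:                            # discard burn-in
        samples.append(x)

E = np.mean([local_energy(s) for s in samples])   # variational energy >= 0.5
```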
arXiv Detail & Related papers (2020-07-28T14:52:28Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in a structure suited to neural networks.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
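A structural sketch of the multi-level idea: a cheap local pass on the fine graph handles short-range interaction, a coarsened copy handles long-range interaction, and the two are combined at linear cost. The 1D chain and the averaging transfer operators are assumptions, not the paper's kernels.

```python
# Two-level message passing on a 1D chain: fine-level local averaging
# plus a coarse-level pass with wider reach, combined at O(n) cost.
import numpy as np

n, r = 256, 4                          # fine nodes, coarsening ratio
f = np.sin(2 * np.pi * np.linspace(0, 1, n))

def local_pass(v):                     # short-range: neighbor averaging
    return (np.roll(v, 1) + v + np.roll(v, -1)) / 3.0

def restrict(v):                       # fine -> coarse (block average)
    return v.reshape(-1, r).mean(axis=1)

def prolong(vc):                       # coarse -> fine (copy into blocks)
    return np.repeat(vc, r)

u = local_pass(f)                               # fine level
uc = local_pass(local_pass(restrict(f)))        # coarse level, wider reach
u = u + prolong(uc)                             # combine the two ranges
```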
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.