Efficiency of the hidden fermion determinant states Ansatz in the light of different complexity measures
- URL: http://arxiv.org/abs/2411.04527v1
- Date: Thu, 07 Nov 2024 08:36:37 GMT
- Title: Efficiency of the hidden fermion determinant states Ansatz in the light of different complexity measures
- Authors: Björn J. Wurst, Dante M. Kennes, Jonas B. Profe
- Abstract summary: Ansätze utilize the expressivity of neural networks to tackle fundamentally challenging problems.
We study five different fermionic models displaying volume law scaling of the entanglement entropy.
We provide evidence that whenever one of the measures indicates proximity to a parameter region in which a conventional approach would work reliably, the neural network approach also works reliably and efficiently.
- Abstract: Finding reliable approximations to the quantum many-body problem is one of the central challenges of modern physics. Elemental to this endeavor is the development of advanced numerical techniques pushing the limits of what is tractable. One such recently proposed numerical technique is neural quantum states. This new type of wavefunction-based Ansatz utilizes the expressivity of neural networks to tackle fundamentally challenging problems, such as the Mott transition. In this paper we aim to gauge the universality of one representative of neural network Ansätze, the hidden-fermion Slater determinant approach. To this end, we study five different fermionic models, each displaying volume law scaling of the entanglement entropy. For these, we correlate the effectiveness of the Ansatz with different complexity measures. Each measure indicates a different complexity in the absence of which a conventional Ansatz becomes efficient. We provide evidence that whenever one of the measures indicates proximity to a parameter region in which a conventional approach would work reliably, the neural network approach also works reliably and efficiently. This highlights the great potential, but also the challenges, for neural network approaches: finding suitable points in theory space around which to construct the Ansatz, in order to be able to efficiently treat models unsuitable for their current designs.
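The hidden-fermion determinant construction can be illustrated with a minimal numerical sketch. This is an illustration only, not the paper's implementation: the function name `hfds_amplitude`, the placeholder `hidden_fn` (which stands in for the neural network producing the hidden-fermion rows), and the matrix shapes are assumptions made for the example. The idea is that the wavefunction amplitude is the determinant of an augmented matrix: rows of fixed single-particle orbitals for the visible fermions, plus configuration-dependent rows for the hidden fermions.

```python
import numpy as np

def hfds_amplitude(occ, phi_visible, hidden_fn, n_hidden):
    """Hidden-fermion determinant amplitude for one occupation configuration.

    occ          : 1D array of occupied site indices (length N)
    phi_visible  : (L, N + n_hidden) matrix of single-particle orbitals
    hidden_fn    : maps the configuration to an (n_hidden, N + n_hidden)
                   block of rows for the hidden fermions; in the actual
                   Ansatz this map is parametrized by a neural network
    """
    top = phi_visible[occ, :]   # visible-fermion rows, shape (N, N + n_hidden)
    bottom = hidden_fn(occ)     # configuration-dependent hidden rows
    return np.linalg.det(np.vstack([top, bottom]))
```

With `n_hidden = 0` the hidden block is empty and the amplitude reduces to the conventional Slater determinant, which is the "conventional Ansatz" limit the abstract refers to.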
Related papers
- An Unsupervised Deep Learning Approach for the Wave Equation Inverse
Problem [12.676629870617337]
Full-waveform inversion (FWI) is a powerful geophysical imaging technique that infers high-resolution subsurface physical parameters.
Due to limitations in observation, limited shots or receivers, and random noise, conventional inversion methods are confronted with numerous challenges.
We provide an unsupervised learning approach aimed at accurately reconstructing physical velocity parameters.
arXiv Detail & Related papers (2023-11-08T08:39:33Z) - Physics-Informed Neural Networks for an optimal counterdiabatic quantum
computation [32.73124984242397]
We introduce a novel methodology that leverages the strength of Physics-Informed Neural Networks (PINNs) to address the counterdiabatic (CD) protocol in the optimization of quantum circuits comprised of systems with $N_Q$ qubits.
The main applications of this methodology have been the $\mathrm{H}_2$ and $\mathrm{LiH}$ molecules, represented by 2-qubit and 4-qubit systems employing the STO-3G basis.
arXiv Detail & Related papers (2023-09-08T16:55:39Z) - Solving the nuclear pairing model with neural network quantum states [58.720142291102135]
We present a variational Monte Carlo method that solves the nuclear many-body problem in the occupation number formalism.
A memory-efficient version of the reconfiguration algorithm is developed to train the network by minimizing the expectation value of the Hamiltonian.
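The quantity being minimized above, the expectation value of the Hamiltonian, can be sketched in exact-summation form. This is a generic illustration, not the paper's memory-efficient reconfiguration algorithm: the function name `variational_energy` and the use of a dense Hamiltonian matrix are assumptions for the example. In real VMC the explicit sum over configurations is replaced by Monte Carlo samples drawn from $|\psi(x)|^2$.

```python
import numpy as np

def variational_energy(H, psi):
    """Variational energy E = sum_x p(x) * E_loc(x).

    p(x)     = |psi(x)|^2 / sum_x' |psi(x')|^2   (Born probabilities)
    E_loc(x) = sum_x' H[x, x'] psi(x') / psi(x)  (local energy)

    H   : (D, D) Hamiltonian in some configuration basis
    psi : (D,) wavefunction amplitudes, assumed nonzero
    """
    p = np.abs(psi) ** 2
    p /= p.sum()
    e_loc = (H @ psi) / psi
    return np.real(np.sum(p * e_loc))
```

Sampling `E_loc` under `p` is what makes the estimator tractable for large Hilbert spaces: the local energy is cheap whenever `H` is sparse in the chosen basis.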
arXiv Detail & Related papers (2022-11-09T00:18:01Z) - Decimation technique for open quantum systems: a case study with
driven-dissipative bosonic chains [62.997667081978825]
Unavoidable coupling of quantum systems to external degrees of freedom leads to dissipative (non-unitary) dynamics.
We introduce a method to deal with these systems based on the calculation of (dissipative) lattice Green's function.
We illustrate the power of this method with several examples of driven-dissipative bosonic chains of increasing complexity.
arXiv Detail & Related papers (2022-02-15T19:00:09Z) - Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
arXiv Detail & Related papers (2022-02-07T17:47:46Z) - Correlation-Enhanced Neural Networks as Interpretable Variational
Quantum States [0.0]
Variational methods have proven to be excellent tools to approximate ground states of complex many body Hamiltonians.
We introduce a neural-network based variational ansatz that retains the flexibility of these generic methods while allowing for a tunability with respect to the relevant correlations governing the physics of the system.
arXiv Detail & Related papers (2021-03-08T19:01:12Z) - Deep Magnification-Flexible Upsampling over 3D Point Clouds [103.09504572409449]
We propose a novel end-to-end learning-based framework to generate dense point clouds.
We first formulate the problem explicitly, which boils down to determining the weights and high-order approximation errors.
Then, we design a lightweight neural network to adaptively learn unified and sorted weights as well as the high-order refinements.
arXiv Detail & Related papers (2020-11-25T14:00:18Z) - Learning the ground state of a non-stoquastic quantum Hamiltonian in a
rugged neural network landscape [0.0]
We investigate a class of universal variational wave-functions based on artificial neural networks.
In particular, we show that in the present setup the neural network expressivity and Monte Carlo sampling are not primary limiting factors.
arXiv Detail & Related papers (2020-11-23T05:25:47Z) - Variational Monte Carlo calculations of $\mathbf{A\leq 4}$ nuclei with
an artificial neural-network correlator ansatz [62.997667081978825]
We introduce a neural-network quantum state ansatz to model the ground-state wave function of light nuclei.
We compute the binding energies and point-nucleon densities of $A\leq 4$ nuclei as emerging from a leading-order pionless effective field theory Hamiltonian.
arXiv Detail & Related papers (2020-07-28T14:52:28Z) - Differentiable Causal Discovery from Interventional Data [141.41931444927184]
We propose a theoretically-grounded method based on neural networks that can leverage interventional data.
We show that our approach compares favorably to the state of the art in a variety of settings.
arXiv Detail & Related papers (2020-07-03T15:19:17Z) - Reachability Analysis for Feed-Forward Neural Networks using Face
Lattices [10.838397735788245]
We propose a parallelizable technique to compute the exact reachable set of a neural network for a given input set.
Our approach is capable of constructing the complete input set given an output set, so that any input that leads to safety violation can be tracked.
arXiv Detail & Related papers (2020-03-02T22:23:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.