A Size-Consistent Wave-function Ansatz Built from Statistical Analysis
of Orbital Occupations
- URL: http://arxiv.org/abs/2304.10484v1
- Date: Thu, 20 Apr 2023 17:30:06 GMT
- Title: A Size-Consistent Wave-function Ansatz Built from Statistical Analysis
of Orbital Occupations
- Authors: Valerii Chuiko, Paul W. Ayers
- Abstract summary: We present a fresh approach to wavefunction parametrization that is size-consistent, rapidly convergent, and numerically robust.
The general utility of this approach is verified by applying it to uncorrelated, weakly-correlated, and strongly-correlated systems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Direct approaches to the quantum many-body problem suffer from the so-called
"curse of dimensionality": the number of parameters needed to fully specify the
exact wavefunction grows exponentially with increasing system size. This
motivates the development of accurate, but approximate, ways to parametrize the
wavefunction, including methods like coupled cluster theory and correlator
product states (CPS). Recently, there has been interest in approaches based on
machine learning, both direct applications of neural-network architectures and
combinations of conventional wavefunction parametrizations with various
Boltzmann machines. While all these methods can be exact in principle, they are
usually applied with only a polynomial number of parameters, limiting their
applicability. This research's objective is to present a fresh approach to
wavefunction parametrization that is size-consistent, rapidly convergent, and
numerically robust. Specifically, we propose a hierarchical ansatz that
converges rapidly (with respect to the number of least-squares optimizations).
The general utility of this approach is verified by applying it to
uncorrelated, weakly-correlated, and strongly-correlated systems, including
small molecules and the one-dimensional Hubbard model.
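To make the scaling claim concrete, here is a small illustrative sketch (our addition, not from the paper; the function name is ours): the number of Slater determinants in an exact full-CI expansion grows combinatorially with the number of orbitals and electrons, which is the practical face of the curse of dimensionality described above.

```python
# Sketch (not from the paper): count full-CI determinants to illustrate how the
# number of exact-wavefunction parameters explodes with system size.
from math import comb

def fci_determinants(n_spatial_orbitals: int, n_alpha: int, n_beta: int) -> int:
    """Number of Slater determinants in a full-CI expansion: alpha and beta
    occupations are chosen independently among the spatial orbitals."""
    return comb(n_spatial_orbitals, n_alpha) * comb(n_spatial_orbitals, n_beta)

for n in (10, 20, 40, 80):
    # half filling: n/2 alpha and n/2 beta electrons
    print(n, fci_determinants(n, n // 2, n // 2))
# At half filling the count grows roughly as 4^n / n; already at 40 spatial
# orbitals it exceeds 10^22, which is why polynomially parametrized ansatze
# are needed in practice.
```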
Related papers
- Efficiency of the hidden fermion determinant states Ansatz in the light of different complexity measures [0.0]
Ans"atze utilizes the expressivity of neural networks to tackle fundamentally challenging problems.
We study five different fermionic models displaying volume law scaling of the entanglement entropy.
We provide evidence that whenever one of the measures indicates proximity to a parameter region in which a conventional approach would work reliably, the neural-network approach also works reliably and efficiently.
arXiv Detail & Related papers (2024-11-07T08:36:37Z) - Compact Multi-Threshold Quantum Information Driven Ansatz For Strongly Interactive Lattice Spin Models [0.0]
We introduce a systematic procedure for ansatz building based on approximate Quantum Mutual Information (QMI).
Our approach generates a layered-structured ansatz, where each layer's qubit pairs are selected based on their QMI values, resulting in more efficient state preparation and optimization routines.
Our results show that the Multi-QIDA method reduces the computational complexity while maintaining high precision, making it a promising tool for quantum simulations in lattice spin models.
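As a rough illustration of the selection rule this summary describes, the sketch below (a simplified assumption on our part, not the Multi-QIDA code) ranks qubit pairs by a given mutual-information matrix and keeps the highest-valued disjoint pairs for one ansatz layer.

```python
# Sketch (assumption, not the paper's implementation): greedily select qubit
# pairs with the largest mutual-information values to define one ansatz layer.
import numpy as np

def select_layer_pairs(qmi: np.ndarray, n_pairs: int):
    """qmi: symmetric (n_qubits x n_qubits) matrix of mutual-information values.
    Returns up to n_pairs disjoint pairs (i, j) with the largest entries."""
    n = qmi.shape[0]
    # rank all i < j pairs by descending mutual information
    ranked = sorted(((qmi[i, j], i, j) for i in range(n) for j in range(i + 1, n)),
                    reverse=True)
    chosen, used = [], set()
    for value, i, j in ranked:
        if i in used or j in used:
            continue  # keep the layer a matching: each qubit appears at most once
        chosen.append((i, j))
        used.update((i, j))
        if len(chosen) == n_pairs:
            break
    return chosen

# Example: 4 qubits with strong (0,1) and (2,3) correlations
qmi = np.array([[0.0, 0.9, 0.1, 0.2],
                [0.9, 0.0, 0.3, 0.1],
                [0.1, 0.3, 0.0, 0.8],
                [0.2, 0.1, 0.8, 0.0]])
print(select_layer_pairs(qmi, 2))  # [(0, 1), (2, 3)]
```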
arXiv Detail & Related papers (2024-08-05T17:07:08Z) - Projective Quantum Eigensolver via Adiabatically Decoupled Subsystem Evolution: a Resource Efficient Approach to Molecular Energetics in Noisy Quantum Computers [0.0]
We develop a projective formalism that aims to compute ground-state energies of molecular systems accurately using Noisy Intermediate Scale Quantum (NISQ) hardware.
We demonstrate the method's superior performance under noise while concurrently ensuring requisite accuracy in future fault-tolerant systems.
arXiv Detail & Related papers (2024-03-13T13:27:40Z) - An Optimization-based Deep Equilibrium Model for Hyperspectral Image
Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
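For readers unfamiliar with Deep Equilibrium models, the fixed-point calculation mentioned above can be sketched generically (an illustrative assumption; the paper's learned regularizer and solver are not reproduced here).

```python
# Generic fixed-point iteration as used in Deep Equilibrium models (sketch only).
import numpy as np

def fixed_point(f, x, z0, tol=1e-8, max_iter=500):
    """Iterate z <- f(z, x) until ||z_new - z|| falls below tol."""
    z = z0
    for _ in range(max_iter):
        z_new = f(z, x)
        if np.linalg.norm(z_new - z) < tol:
            return z_new
        z = z_new
    return z

# Toy contraction: f(z, x) = 0.5 * z + x has the unique fixed point z* = 2 * x
x = np.array([1.0, -2.0])
print(fixed_point(lambda z, x: 0.5 * z + x, x, np.zeros_like(x)))  # ~[2., -4.]
```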
arXiv Detail & Related papers (2023-06-10T08:25:16Z) - FaDIn: Fast Discretized Inference for Hawkes Processes with General
Parametric Kernels [82.53569355337586]
This work offers an efficient solution to temporal point processes inference using general parametric kernels with finite support.
The method's effectiveness is evaluated by modeling the occurrence of stimuli-induced patterns from brain signals recorded with magnetoencephalography (MEG).
Results show that the proposed approach yields improved estimation of pattern latency compared to the state-of-the-art.
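For context, a Hawkes process has conditional intensity λ(t) = μ + Σ_{t_i < t} φ(t − t_i). The sketch below (our illustration with an assumed truncated-exponential kernel, not FaDIn's discretized solver) evaluates that intensity with a finite-support parametric kernel.

```python
# Sketch of a Hawkes-process conditional intensity with a finite-support
# parametric kernel (illustration only; not the FaDIn inference algorithm).
import numpy as np

def truncated_exp_kernel(dt, alpha, beta, support=1.0):
    """phi(dt) = alpha * beta * exp(-beta * dt) on [0, support), zero elsewhere."""
    dt = np.asarray(dt, dtype=float)
    return np.where((dt >= 0) & (dt < support), alpha * beta * np.exp(-beta * dt), 0.0)

def intensity(t, events, mu, alpha, beta, support=1.0):
    """lambda(t) = mu + sum over past events t_i < t of phi(t - t_i)."""
    events = np.asarray(events, dtype=float)
    past = events[events < t]
    return mu + truncated_exp_kernel(t - past, alpha, beta, support).sum()

events = [0.1, 0.4, 0.9, 2.0]
print(intensity(1.0, events, mu=0.2, alpha=0.5, beta=3.0))
```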
arXiv Detail & Related papers (2022-10-10T12:35:02Z) - Accurate methods for the analysis of strong-drive effects in parametric
gates [94.70553167084388]
We show how to efficiently extract gate parameters using exact numerics and a perturbative analytical approach.
We identify optimal regimes of operation for different types of gates including $i$SWAP, controlled-Z, and CNOT.
arXiv Detail & Related papers (2021-07-06T02:02:54Z) - Circuit quantum electrodynamics (cQED) with modular quasi-lumped models [0.23624125155742057]
The method partitions a quantum device into compact lumped or quasi-distributed cells.
We experimentally validate the method on large-scale, state-of-the-art superconducting quantum processors.
arXiv Detail & Related papers (2021-03-18T16:03:37Z) - Leveraging Global Parameters for Flow-based Neural Posterior Estimation [90.21090932619695]
Inferring the parameters of a model based on experimental observations is central to the scientific method.
A particularly challenging setting is when the model is strongly indeterminate, i.e., when distinct sets of parameters yield identical observations.
We present a method for cracking such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters.
arXiv Detail & Related papers (2021-02-12T12:23:13Z) - Generalized Matrix Factorization: efficient algorithms for fitting
generalized linear latent variable models to large data arrays [62.997667081978825]
Generalized Linear Latent Variable models (GLLVMs) generalize such factor models to non-Gaussian responses.
Current algorithms for estimating model parameters in GLLVMs require intensive computation and do not scale to large datasets.
We propose a new approach for fitting GLLVMs to high-dimensional datasets, based on approximating the model using penalized quasi-likelihood.
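For context, the GLLVM structure being fitted has the standard form below (textbook notation added here for clarity; not copied from the paper): the mean of each response is linked to covariates and a low-dimensional latent factor.

```latex
% Generalized linear latent variable model (standard form):
% response y_{ij} of observation i on variable j, latent factors u_i.
g\!\left(\mathbb{E}[y_{ij} \mid \mathbf{u}_i]\right)
   = \beta_{0j} + \mathbf{x}_i^{\top}\boldsymbol{\beta}_j
   + \mathbf{u}_i^{\top}\boldsymbol{\lambda}_j,
\qquad \mathbf{u}_i \sim \mathcal{N}(\mathbf{0}, \mathbf{I})
```

Here g is the link function of an exponential-family response, and the loadings λ_j play the role of factor loadings in ordinary factor analysis.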
arXiv Detail & Related papers (2020-10-06T04:28:19Z) - Variational Monte Carlo calculations of $\mathbf{A\leq 4}$ nuclei with
an artificial neural-network correlator ansatz [62.997667081978825]
We introduce a neural-network quantum state ansatz to model the ground-state wave function of light nuclei.
We compute the binding energies and point-nucleon densities of $A \leq 4$ nuclei as emerging from a leading-order pionless effective field theory Hamiltonian.
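The general pattern of a neural-network correlator ansatz can be sketched as follows (a minimal illustration under our own assumptions; the paper's actual architecture and the pionless-EFT Hamiltonian are not reproduced here): a mean-field amplitude is multiplied by the exponential of a small neural network of the particle coordinates.

```python
# Sketch of a neural-network correlator ansatz: psi(R) = phi_mf(R) * exp(NN(R)).
# Illustration only; not the architecture used in the paper.
import numpy as np

rng = np.random.default_rng(0)

class NNCorrelatorAnsatz:
    """Mean-field amplitude multiplied by a Jastrow-like neural-network factor."""

    def __init__(self, n_inputs, n_hidden=16):
        self.W1 = 0.1 * rng.standard_normal((n_hidden, n_inputs))
        self.b1 = np.zeros(n_hidden)
        self.w2 = 0.1 * rng.standard_normal(n_hidden)

    def log_correlator(self, R):
        # one hidden tanh layer mapping coordinates to a scalar log-correlator
        return self.w2 @ np.tanh(self.W1 @ R + self.b1)

    def amplitude(self, R, mean_field_amplitude):
        return mean_field_amplitude * np.exp(self.log_correlator(R))

# Example: 4 nucleons in 3D -> 12 coordinates, Gaussian mean-field amplitude
R = rng.standard_normal(12)
phi_mf = np.exp(-0.5 * np.dot(R, R))
print(NNCorrelatorAnsatz(n_inputs=12).amplitude(R, phi_mf))
```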
arXiv Detail & Related papers (2020-07-28T14:52:28Z)