Generating Generalised Ground-State Ansatzes from Few-Body Examples
- URL: http://arxiv.org/abs/2503.00497v2
- Date: Thu, 24 Apr 2025 11:56:14 GMT
- Title: Generating Generalised Ground-State Ansatzes from Few-Body Examples
- Authors: Matt Lourens, Ilya Sinayskiy, Johannes N. Kriel, Francesco Petruccione
- Abstract summary: We introduce a method that generates ground-state ansatzes for quantum many-body systems. The ansatzes are analytically tractable and accurate over wide parameter regimes. We demonstrate this method on the Lipkin-Meshkov-Glick model (LMG) and the quantum transverse-field Ising model (TFIM).
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a method that generates ground-state ansatzes for quantum many-body systems which are both analytically tractable and accurate over wide parameter regimes. Our approach leverages a custom symbolic language to construct tensor network states (TNS) via an evolutionary algorithm. This language provides operations that allow the generated TNS to automatically scale with system size. Consequently, we can evaluate ansatz fitness for small systems, which is computationally efficient, while favouring structures that continue to perform well with increasing system size. This ensures that the ansatz captures robust features of the ground state structure. Remarkably, we find analytically tractable ansatzes with a degree of universality, which encode correlations, capture finite-size effects, accurately predict ground-state energies, and offer a good description of critical phenomena. We demonstrate this method on the Lipkin-Meshkov-Glick model (LMG) and the quantum transverse-field Ising model (TFIM), where the same ansatz was independently generated for both. The simple structure of the ansatz allows us to restore broken symmetries and obtain exact expressions for the expectation values of local observables and correlation functions.
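The paper's symbolic TNS language and evolutionary algorithm are not reproduced here, but the core idea of the fitness evaluation, scoring a candidate ansatz against exact ground states on several small system sizes so that size-robust structures are favoured, can be illustrated with a toy sketch. A uniform product-state ansatz with one parameter stands in for a generated TNS, the TFIM is diagonalized exactly for small chains, and a crude mutate-and-keep loop stands in for the evolutionary search; all function names and parameter choices below are illustrative, not the paper's implementation:

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def kron_chain(ops):
    """Tensor product of a list of single-site operators."""
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

def tfim_hamiltonian(n, h):
    """Open-chain TFIM: H = -sum_i Z_i Z_{i+1} - h sum_i X_i."""
    H = np.zeros((2**n, 2**n))
    for i in range(n - 1):
        ops = [I2] * n; ops[i] = Z; ops[i + 1] = Z
        H -= kron_chain(ops)
    for i in range(n):
        ops = [I2] * n; ops[i] = X
        H -= h * kron_chain(ops)
    return H

def product_ansatz(n, theta):
    """Toy stand-in for a generated TNS: the same single-site
    state cos(theta)|0> + sin(theta)|1> on every site."""
    single = np.array([np.cos(theta), np.sin(theta)])
    psi = single
    for _ in range(n - 1):
        psi = np.kron(psi, single)
    return psi

def fitness(theta, sizes, h):
    """Average per-site energy error of the ansatz against exact
    diagonalization, averaged over several small system sizes so
    that size-robust structures are rewarded."""
    err = 0.0
    for n in sizes:
        H = tfim_hamiltonian(n, h)
        e0 = np.linalg.eigvalsh(H)[0]          # exact ground-state energy
        psi = product_ansatz(n, theta)
        err += (psi @ H @ psi - e0) / n        # variational, so >= 0
    return err / len(sizes)

# Crude evolutionary loop: mutate the parameter, keep improvements.
rng = np.random.default_rng(0)
theta = 0.3
best = fitness(theta, sizes=(2, 3, 4), h=0.5)
for _ in range(200):
    cand = theta + rng.normal(scale=0.1)
    f = fitness(cand, sizes=(2, 3, 4), h=0.5)
    if f < best:
        theta, best = cand, f
```

Because the fitness is averaged over several small sizes, a parameter that happens to fit only one chain length is penalized, which is a one-parameter caricature of how the paper's search favours structures that continue to perform well as the system grows.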
Related papers
- Spectral Normalization and Voigt-Reuss net: A universal approach to microstructure-property forecasting with physical guarantees [0.0]
A crucial step in the design process is the rapid evaluation of effective mechanical, thermal, or, in general, elasticity properties.
The classical simulation-based approach, which uses, e.g., finite elements and FFT-based solvers, can require substantial computational resources.
We propose a novel spectral normalization scheme that a priori enforces these bounds.
arXiv Detail & Related papers (2025-04-01T12:21:57Z) - Efficient Transformed Gaussian Process State-Space Models for Non-Stationary High-Dimensional Dynamical Systems [49.819436680336786]
We propose an efficient transformed Gaussian process state-space model (ETGPSSM) for scalable and flexible modeling of high-dimensional, non-stationary dynamical systems.
Specifically, our ETGPSSM integrates a single shared GP with input-dependent normalizing flows, yielding an expressive implicit process prior that captures complex, non-stationary transition dynamics.
Our ETGPSSM outperforms existing GPSSMs and neural network-based SSMs in terms of computational efficiency and accuracy.
arXiv Detail & Related papers (2025-03-24T03:19:45Z) - Geometric Neural Process Fields [58.77241763774756]
Geometric Neural Process Fields (G-NPF) is a probabilistic framework for neural radiance fields that explicitly captures uncertainty. Building on these bases, we design a hierarchical latent variable model, allowing G-NPF to integrate structural information across multiple spatial levels. Experiments on novel-view synthesis for 3D scenes, as well as 2D image and 1D signal regression, demonstrate the effectiveness of our method.
arXiv Detail & Related papers (2025-02-04T14:17:18Z) - Heterogenous Memory Augmented Neural Networks [84.29338268789684]
We introduce a novel heterogeneous memory augmentation approach for neural networks.
By introducing learnable memory tokens with attention mechanism, we can effectively boost performance without huge computational overhead.
We show our approach on various image and graph-based tasks under both in-distribution (ID) and out-of-distribution (OOD) conditions.
arXiv Detail & Related papers (2023-10-17T01:05:28Z) - Message-Passing Neural Quantum States for the Homogeneous Electron Gas [41.94295877935867]
We introduce a message-passing-neural-network-based wave function Ansatz to simulate extended, strongly interacting fermions in continuous space.
We demonstrate its accuracy by simulating the ground state of the homogeneous electron gas in three spatial dimensions.
arXiv Detail & Related papers (2023-05-12T04:12:04Z) - Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z) - A framework for efficient ab initio electronic structure with Gaussian Process States [0.0]
We present a framework for the efficient simulation of realistic fermionic systems with modern machine learning inspired representations of quantum many-body states.
We show competitive accuracy for systems with up to 64 electrons, including a simplified (yet fully ab initio) model of the Mott transition in three-dimensional hydrogen.
arXiv Detail & Related papers (2023-02-02T13:40:38Z) - Positive-definite parametrization of mixed quantum states with deep neural networks [0.0]
We show how to embed an autoregressive structure in the GHDO to allow direct sampling of the probability distribution.
We benchmark this architecture by the steady state of the dissipative transverse-field Ising model.
arXiv Detail & Related papers (2022-06-27T17:51:38Z) - Orthogonal Stochastic Configuration Networks with Adaptive Construction Parameter for Data Analytics [6.940097162264939]
The randomness in stochastic configuration networks (SCNs) makes them prone to generating approximately linearly correlated hidden nodes that are redundant and of low quality.
In light of a fundamental principle in machine learning, namely that a model with fewer parameters tends to generalize better, this paper proposes an orthogonal SCN, termed OSCN, which filters out the low-quality hidden nodes to reduce the network structure.
arXiv Detail & Related papers (2022-05-26T07:07:26Z) - Post-mortem on a deep learning contest: a Simpson's paradox and the complementary roles of scale metrics versus shape metrics [61.49826776409194]
We analyze a corpus of models made publicly-available for a contest to predict the generalization accuracy of neural network (NN) models.
We identify what amounts to a Simpson's paradox: "scale" metrics perform well overall but perform poorly on sub-partitions of the data.
We present two novel shape metrics, one data-independent, and the other data-dependent, which can predict trends in the test accuracy of a series of NNs.
arXiv Detail & Related papers (2021-06-01T19:19:49Z) - Autoregressive Transformer Neural Network for Simulating Open Quantum Systems via a Probabilistic Formulation [5.668795025564699]
We present an approach for tackling open quantum system dynamics.
We compactly represent quantum states with autoregressive transformer neural networks.
Efficient algorithms have been developed to simulate the dynamics of the Liouvillian superoperator.
arXiv Detail & Related papers (2020-09-11T18:00:00Z) - Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using a gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z) - A Trainable Optimal Transport Embedding for Feature Aggregation and its Relationship to Attention [96.77554122595578]
We introduce a parametrized representation of fixed size, which embeds and then aggregates elements from a given input set according to the optimal transport plan between the set and a trainable reference.
Our approach scales to large datasets and allows end-to-end training of the reference, while also providing a simple unsupervised learning mechanism with small computational cost.
arXiv Detail & Related papers (2020-06-22T08:35:58Z) - Generalising Recursive Neural Models by Tensor Decomposition [12.069862650316262]
We introduce a general approach to model aggregation of structural context leveraging a tensor-based formulation.
We show how the exponential growth in the size of the parameter space can be controlled through an approximation based on the Tucker decomposition.
By this means, we can effectively regulate the trade-off between expressivity of the encoding, controlled by the hidden size, computational complexity and model generalisation.
arXiv Detail & Related papers (2020-06-17T17:28:19Z) - Gaussian Process States: A data-driven representation of quantum
many-body physics [59.7232780552418]
We present a novel, non-parametric form for compactly representing entangled many-body quantum states.
The state is found to be highly compact, systematically improvable and efficient to sample.
It is also proven to be a "universal approximator" for quantum states, able to capture any entangled many-body state with increasing data set size.
arXiv Detail & Related papers (2020-02-27T15:54:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.