Neural Importance Resampling: A Practical Sampling Strategy for Neural Quantum States
- URL: http://arxiv.org/abs/2507.20510v1
- Date: Mon, 28 Jul 2025 04:16:17 GMT
- Title: Neural Importance Resampling: A Practical Sampling Strategy for Neural Quantum States
- Authors: Eimantas Ledinauskas, Egidijus Anisimovas
- Abstract summary: We introduce Neural Importance Resampling (NIR), a new sampling algorithm that combines importance resampling with a separately trained autoregressive proposal network. We demonstrate that NIR supports stable and scalable training, including for multi-state NQS, and mitigates issues faced by MCMC and autoregressive approaches.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural quantum states (NQS) have emerged as powerful tools for simulating many-body quantum systems, but their practical use is often hindered by limitations of current sampling techniques. Markov chain Monte Carlo (MCMC) methods suffer from slow mixing and require manual tuning, while autoregressive NQS impose restrictive architectural constraints that complicate the enforcement of symmetries and the construction of determinant-based multi-state wave functions. In this work, we introduce Neural Importance Resampling (NIR), a new sampling algorithm that combines importance resampling with a separately trained autoregressive proposal network. This approach enables efficient and unbiased sampling without constraining the NQS architecture. We demonstrate that NIR supports stable and scalable training, including for multi-state NQS, and mitigates issues faced by MCMC and autoregressive approaches. Numerical experiments on the 2D transverse-field Ising model show that NIR outperforms MCMC in challenging regimes and yields results competitive with density matrix renormalization group (DMRG) methods. Our results establish NIR as a robust alternative for sampling in variational NQS algorithms.
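The core idea described in the abstract, sampling-importance-resampling with a learned proposal, can be illustrated with a minimal toy sketch. The functions below are hypothetical stand-ins: in the paper the proposal is a trained autoregressive network and the target is the squared NQS amplitude, whereas here both are simple closed-form toy distributions over spin configurations.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 8  # toy chain length

def proposal_sample(n):
    # Toy proposal: uniform spin configurations (stand-in for an
    # autoregressive proposal network).
    return rng.integers(0, 2, size=(n, L))

def proposal_prob(x):
    # Uniform proposal probability over all 2^L configurations.
    return np.full(len(x), 0.5 ** L)

def target_prob(x):
    # Toy unnormalized target (stand-in for |psi(x)|^2 of an NQS);
    # favors magnetized configurations.
    m = np.abs(x.mean(axis=1) - 0.5)
    return np.exp(4.0 * m)

def importance_resample(n_draw, n_keep):
    # Draw from the proposal, weight by target/proposal, then resample
    # with probability proportional to the importance weights.
    x = proposal_sample(n_draw)
    w = target_prob(x) / proposal_prob(x)
    w = w / w.sum()
    idx = rng.choice(n_draw, size=n_keep, p=w)
    return x[idx]

samples = importance_resample(n_draw=10_000, n_keep=1_000)
```

The resampled set approximates draws from the target without any Markov chain, which is the property that lets the NQS architecture remain unconstrained; the quality of the approximation depends on how well the proposal covers the target.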
Related papers
- TensoMeta-VQC: A Tensor-Train-Guided Meta-Learning Framework for Robust and Scalable Variational Quantum Computing [60.996803677584424]
TensoMeta-VQC is a novel tensor-train (TT)-guided meta-learning framework designed to improve the robustness and scalability of VQC significantly. Our framework fully delegates the generation of quantum circuit parameters to a classical TT network, effectively decoupling optimization from quantum hardware.
arXiv Detail & Related papers (2025-08-01T23:37:55Z) - Microcanonical Langevin Ensembles: Advancing the Sampling of Bayesian Neural Networks [4.8767011596635275]
We introduce an ensembling approach that leverages strategies from optimization and a recently proposed sampler for efficient, robust and predictable sampling performance. Compared to approaches based on the state-of-the-art No-U-Turn sampler, our approach delivers substantial speedups up to an order of magnitude.
arXiv Detail & Related papers (2025-02-10T10:36:42Z) - Deep-Unrolling Multidimensional Harmonic Retrieval Algorithms on Neuromorphic Hardware [78.17783007774295]
This paper explores the potential of conversion-based neuromorphic algorithms for highly accurate and energy-efficient single-snapshot multidimensional harmonic retrieval. A novel method for converting the complex-valued convolutional layers and activations into spiking neural networks (SNNs) is developed. The converted SNNs achieve almost five-fold power efficiency at moderate performance loss compared to the original CNNs.
arXiv Detail & Related papers (2024-12-05T09:41:33Z) - Simulating Non-Markovian Open Quantum Dynamics with Neural Quantum States [9.775774445091516]
We encode environmental memory in dissipatons, yielding the dissipaton-embedded quantum master equation (DQME). The resulting NQS-DQME framework achieves compact representation of many-body correlations and non-Markovian memory. This methodology opens new paths to explore non-Markovian open quantum dynamics in previously intractable systems.
arXiv Detail & Related papers (2024-04-17T06:17:08Z) - High Accuracy Uncertainty-Aware Interatomic Force Modeling with Equivariant Bayesian Neural Networks [3.028098724882708]
We introduce a new Monte Carlo Markov chain sampling algorithm for learning interatomic forces.
In addition, we introduce a new neural network model based on the NequIP architecture and demonstrate that, when combined with our novel sampling algorithm, we obtain predictions with state-of-the-art accuracy as well as a good measure of uncertainty.
arXiv Detail & Related papers (2023-04-05T10:39:38Z) - Gradient-descent hardware-aware training and deployment for mixed-signal Neuromorphic processors [2.812395851874055]
Mixed-signal neuromorphic processors provide extremely low-power operation for edge inference workloads.
We demonstrate a novel methodology for offline training and deployment of spiking neural networks (SNNs) to the mixed-signal neuromorphic processor DYNAP-SE2.
arXiv Detail & Related papers (2023-03-14T08:56:54Z) - Towards Neural Variational Monte Carlo That Scales Linearly with System Size [67.09349921751341]
Quantum many-body problems are central to demystifying some exotic quantum phenomena, e.g., high-temperature superconductors.
The combination of neural networks (NN) for representing quantum states, and the Variational Monte Carlo (VMC) algorithm, has been shown to be a promising method for solving such problems.
We propose a NN architecture called Vector-Quantized Neural Quantum States (VQ-NQS) that utilizes vector-quantization techniques to leverage redundancies in the local-energy calculations of the VMC algorithm.
arXiv Detail & Related papers (2022-12-21T19:00:04Z) - Decomposition of Matrix Product States into Shallow Quantum Circuits [62.5210028594015]
tensor network (TN) algorithms can be mapped to parametrized quantum circuits (PQCs)
We propose a new protocol for approximating TN states using realistic quantum circuits.
Our results reveal one particular protocol, involving sequential growth and optimization of the quantum circuit, to outperform all other methods.
arXiv Detail & Related papers (2022-09-01T17:08:41Z) - Mixed Precision Low-bit Quantization of Neural Network Language Models for Speech Recognition [67.95996816744251]
State-of-the-art language models (LMs) represented by long-short term memory recurrent neural networks (LSTM-RNNs) and Transformers are becoming increasingly complex and expensive for practical applications.
Current quantization methods are based on uniform precision and fail to account for the varying performance sensitivity at different parts of LMs to quantization errors.
Novel mixed precision neural network LM quantization methods are proposed in this paper.
arXiv Detail & Related papers (2021-11-29T12:24:02Z) - Learning Neural Network Quantum States with the Linear Method [0.0]
We show that the linear method can be used successfully for the optimization of complex valued neural network quantum states.
We compare the LM to the state-of-the-art SR algorithm and find that the LM requires up to an order of magnitude fewer iterations for convergence.
arXiv Detail & Related papers (2021-04-22T12:18:33Z) - Sampling asymmetric open quantum systems for artificial neural networks [77.34726150561087]
We present a hybrid sampling strategy which takes asymmetric properties explicitly into account, achieving fast convergence times and high scalability for asymmetric open systems.
We highlight the universal applicability of artificial neural networks to open quantum systems of this kind.
arXiv Detail & Related papers (2020-12-20T18:25:29Z) - Deep Networks for Direction-of-Arrival Estimation in Low SNR [89.45026632977456]
We introduce a Convolutional Neural Network (CNN) that is trained from multi-channel data of the true array manifold matrix.
We train a CNN in the low-SNR regime to predict DoAs across all SNRs.
Our robust solution can be applied in several fields, ranging from wireless array sensors to acoustic microphones or sonars.
arXiv Detail & Related papers (2020-11-17T12:52:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.