Generative Deep Learning for the Two-Dimensional Quantum Rotor Model
- URL: http://arxiv.org/abs/2602.20772v1
- Date: Tue, 24 Feb 2026 11:06:16 GMT
- Title: Generative Deep Learning for the Two-Dimensional Quantum Rotor Model
- Authors: Yanyang Wang, Feng Gao, Kui Tuo, Wei Li
- Abstract summary: In this work, we design two models based on the foundational architecture of generative adversarial networks (GANs). Within a semi-supervised learning framework, we incorporate multiple layers of transposed convolutions in the generator. Analysis of one-dimensional latent variables associated with ground-state samples for different system sizes allows us to pinpoint the location of the critical point.
- Score: 7.545403823716431
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The advancement of diverse generative deep learning models and their variants has furnished substantial insights for investigating quantum many-body problems. In this work, we design two models based on the foundational architecture of generative adversarial networks (GANs) to investigate the ground-state properties and phase transition characteristics of the two-dimensional quantum rotor model (QRM). Within a semi-supervised learning framework, we incorporate multiple layers of transposed convolutions in the generator, enabling the conditional GAN to more efficiently extract low-dimensional encoded information. Analysis of one-dimensional latent variables associated with ground-state samples for different system sizes allows us to pinpoint the location of the critical point. In addition, we introduce dynamically adaptive weighting factors related to the distributional characteristics into the loss function of the deep convolutional GAN, and utilize upsampling techniques to enlarge the generated sample sizes. Comparisons of the optimization processes for mean magnetization and potential energy density across different magnetization regimes of QRM demonstrate that our model can efficiently generate valid ground-state samples, significantly reducing computational time. Our results highlight the promising potential of generative deep learning in quantum phase transition research, especially in critical point identification and the auxiliary generation of simulation data for quantum many-body models.
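The generator described in the abstract, which stacks transposed convolutions to decode a low-dimensional latent variable into a two-dimensional lattice configuration, can be sketched in plain NumPy. This is a minimal illustration, not the authors' implementation: the latent dimension, kernel size, stride, and tanh activations are placeholder choices, and the kernel is random rather than trained.

```python
import numpy as np

def conv_transpose2d(x, w, stride=2):
    """Single-channel 2D transposed convolution (minimal sketch).

    x: (H, W) input feature map; w: (k, k) kernel.
    Returns an upsampled map of shape ((H-1)*stride + k, (W-1)*stride + k).
    """
    H, W = x.shape
    k = w.shape[0]
    out = np.zeros(((H - 1) * stride + k, (W - 1) * stride + k))
    for i in range(H):
        for j in range(W):
            # Each input pixel "paints" a scaled copy of the kernel
            # onto the output, spaced out by the stride.
            out[i * stride:i * stride + k, j * stride:j * stride + k] += x[i, j] * w
    return out

# Toy "generator": project a 1-D latent z to a 2x2 seed map, then stack
# transposed convolutions to grow it into an 8x8 lattice-sized sample.
rng = np.random.default_rng(0)
z = rng.standard_normal(4)              # 1-D latent variable (hypothetical size)
seed = np.tanh(z.reshape(2, 2))         # 2x2 seed feature map
w = rng.standard_normal((2, 2)) * 0.1   # stand-in for a learned kernel

h = conv_transpose2d(seed, w, stride=2)       # 2x2 -> 4x4
sample = conv_transpose2d(np.tanh(h), w)      # 4x4 -> 8x8
print(sample.shape)  # (8, 8)
```

Each stacked layer roughly doubles the spatial extent, which is why a short stack suffices to reach the lattice sizes studied for different system sizes; in the paper's conditional setting the latent input would additionally be concatenated with a condition label.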
Related papers
- Quantum Phase Transitions in the Transverse-Field Ising Model: A Comparative Study of Exact, Variational, and Hardware-Based Approaches [0.0]
This paper studies the ground-state properties and quantum critical dynamics of the one-dimensional transverse-field Ising model. We focus on a lattice of four spins, where we calculate the ground-state energies, magnetic order parameters, and correlation functions. We find that shallow variational circuits reliably capture the ground-state energies over the entire parameter space.
arXiv Detail & Related papers (2026-01-24T16:26:15Z)
- Multi-resolution Physics-Aware Recurrent Convolutional Neural Network for Complex Flows [2.7233737247962786]
MRPARCv2 is designed to model complex flows by embedding the structure of advection-diffusion-reaction equations. We evaluate the model on a challenging 2D turbulent radiative layer dataset from The Well multi-physics benchmark repository.
arXiv Detail & Related papers (2025-12-04T16:19:10Z) - Fourier Neural Operators for Learning Dynamics in Quantum Spin Systems [77.88054335119074]
We use FNOs to model the evolution of random quantum spin systems.
We apply FNOs to a compact set of Hamiltonian observables instead of the entire $2^n$-dimensional quantum wavefunction.
arXiv Detail & Related papers (2024-09-05T07:18:09Z) - Compact Multi-Threshold Quantum Information Driven Ansatz For Strongly Interactive Lattice Spin Models [0.0]
We introduce a systematic procedure for ansatz building based on approximate Quantum Mutual Information (QMI).
Our approach generates a layered-structured ansatz, where each layer's qubit pairs are selected based on their QMI values, resulting in more efficient state preparation and optimization routines.
Our results show that the Multi-QIDA method reduces the computational complexity while maintaining high precision, making it a promising tool for quantum simulations in lattice spin models.
arXiv Detail & Related papers (2024-08-05T17:07:08Z) - Using a Feedback-Based Quantum Algorithm to Analyze the Critical Properties of the ANNNI Model Without Classical Optimization [0.0]
We investigate the critical properties of the Anisotropic Next-Nearest-Neighbor Ising (ANNNI) model using a feedback-based quantum algorithm (FQA). By exploiting symmetries in the algorithm, we show how targeted initial states can improve convergence and facilitate the study of excited states. Our findings highlight FQA's potential as a versatile tool for studying quantum systems, providing insights into quantum phase transitions and the magnetic properties of complex spin models.
arXiv Detail & Related papers (2024-06-25T20:58:03Z) - Exploring quantum localization with machine learning [39.58317527488534]
We introduce an efficient neural network (NN) architecture for classifying wave functions in terms of their localization.
Our approach integrates a versatile quantum phase space parametrization leading to a custom 'quantum' NN, with the pattern recognition capabilities of a modified convolutional model.
arXiv Detail & Related papers (2024-06-01T08:50:26Z) - Synthetic location trajectory generation using categorical diffusion
models [50.809683239937584]
Diffusion models (DPMs) have rapidly evolved to be one of the predominant generative models for the simulation of synthetic data.
We propose using DPMs for the generation of synthetic individual location trajectories (ILTs) which are sequences of variables representing physical locations visited by individuals.
arXiv Detail & Related papers (2024-02-19T15:57:39Z) - VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space shows an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z) - A performance characterization of quantum generative models [35.974070202997176]
We compare quantum circuits used for quantum generative modeling.
We learn the underlying probability distribution of the data sets via two popular training methods.
We empirically find that a variant of the discrete architecture, which learns the copula of the probability distribution, outperforms all other methods.
arXiv Detail & Related papers (2023-01-23T11:00:29Z) - Latent Variable Representation for Reinforcement Learning [131.03944557979725]
It remains unclear theoretically and empirically how latent variable models may facilitate learning, planning, and exploration to improve the sample efficiency of model-based reinforcement learning.
We provide a representation view of the latent variable models for state-action value functions, which allows both tractable variational learning algorithm and effective implementation of the optimism/pessimism principle.
In particular, we propose a computationally efficient planning algorithm with UCB exploration by incorporating kernel embeddings of latent variable models.
arXiv Detail & Related papers (2022-12-17T00:26:31Z) - Multi-fidelity Hierarchical Neural Processes [79.0284780825048]
Multi-fidelity surrogate modeling reduces the computational cost by fusing different simulation outputs.
We propose Multi-fidelity Hierarchical Neural Processes (MF-HNP), a unified neural latent variable model for multi-fidelity surrogate modeling.
We evaluate MF-HNP on epidemiology and climate modeling tasks, achieving competitive performance in terms of accuracy and uncertainty estimation.
arXiv Detail & Related papers (2022-06-10T04:54:13Z) - Generalization Metrics for Practical Quantum Advantage in Generative
Models [68.8204255655161]
Generative modeling is a widely accepted natural use case for quantum computers.
We construct a simple and unambiguous approach to probe practical quantum advantage for generative modeling by measuring the algorithm's generalization performance.
Our simulation results show that our quantum-inspired models have up to a $68\times$ enhancement in generating unseen unique and valid samples.
arXiv Detail & Related papers (2022-01-21T16:35:35Z) - Variational learning of quantum ground states on spiking neuromorphic
hardware [0.0]
High-dimensional sampling spaces and transient autocorrelations confront neural networks with a challenging computational bottleneck.
Compared to conventional neural networks, physical-model devices offer a fast, efficient and inherently parallel substrate.
We demonstrate the ability of a neuromorphic chip to represent the ground states of quantum spin models by variational energy minimization.
arXiv Detail & Related papers (2021-09-30T14:39:45Z) - Flow-based Generative Models for Learning Manifold to Manifold Mappings [39.60406116984869]
We introduce three kinds of invertible layers for manifold-valued data, which are analogous to their functionality in flow-based generative models.
We show promising results where we can reliably and accurately reconstruct brain images of a field of orientation distribution functions.
arXiv Detail & Related papers (2020-12-18T02:19:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.