Machine Learning the Strong Disorder Renormalization Group Method for Disordered Quantum Spin Chains
- URL: http://arxiv.org/abs/2603.05164v1
- Date: Thu, 05 Mar 2026 13:35:00 GMT
- Title: Machine Learning the Strong Disorder Renormalization Group Method for Disordered Quantum Spin Chains
- Authors: A. Ustyuzhanin, J. Vahedi, S. Kettemann
- Abstract summary: We train machine learning algorithms to infer the entanglement structure of disordered long-range interacting quantum spin chains. We compare a Random Forest as a classical baseline with a graph neural network (GNN) that operates directly on the interaction graph. The GNN achieves a disorder-averaged pairing accuracy close to one and reproduces the entanglement entropy $S(\ell)$ in excellent quantitative agreement with SDRG.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We train machine learning algorithms to infer the entanglement structure of disordered long-range interacting quantum spin chains by learning from the strong disorder renormalisation group (SDRG) method. The system consists of $S=1/2$-quantum spins coupled by antiferromagnetic power-law interactions with decay exponent $\alpha$ at random positions on a one-dimensional chain. Using SDRG as a physics-informed teacher, we compare a Random Forest classifier as a classical baseline with a graph neural network (GNN) that operates directly on the interaction graph and learns a bond-ranking rule mirroring the SDRG decimation policy. The GNN achieves a disorder-averaged pairing accuracy close to one and reproduces the entanglement entropy $S(\ell)$ in excellent quantitative agreement with SDRG across all subsystem sizes and interaction exponents. RG flow heat maps confirm that the GNN learns the sequential decimation hierarchy rather than merely fitting final-state observables. Finite-temperature entanglement properties are incorporated via the SDRGX framework through a two-stage strategy, using the zero-temperature GNN to generate the RG flow and sampling thermal occupations from the canonical ensemble, yielding results in agreement with both numerical SDRGX and analytical predictions without retraining.
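The SDRG teacher that the GNN learns from can be illustrated with a minimal sketch: at each step the strongest remaining bond is decimated, its two spins form a singlet and are removed, and the entanglement entropy $S(\ell)$ counts $\ln 2$ per singlet crossing the subsystem boundary. The function names are illustrative, and neglecting the effective couplings generated between surviving spins is a leading-order simplification, not the paper's full SDRG rule.

```python
import numpy as np

def sdrg_singlet_pairs(positions, alpha):
    """Leading-order SDRG for S=1/2 spins at random positions with
    antiferromagnetic couplings J_ij = |x_i - x_j|^(-alpha): repeatedly
    decimate the strongest bond, pairing its two spins into a singlet
    and removing them (effective couplings between survivors neglected)."""
    idx = list(range(len(positions)))
    pairs = []
    while len(idx) > 1:
        x = np.asarray(positions, dtype=float)[idx]
        dist = np.abs(x[:, None] - x[None, :])
        np.fill_diagonal(dist, np.inf)          # exclude self-couplings
        J = dist ** (-float(alpha))             # power-law couplings
        a, b = np.unravel_index(np.argmax(J), J.shape)
        pairs.append((idx[a], idx[b]))
        for k in sorted((a, b), reverse=True):  # remove the decimated spins
            idx.pop(k)
    return pairs

def entanglement_entropy(pairs, ell):
    """S(ell) = ln(2) times the number of singlets crossing the cut
    between sites [0, ell) and the rest of the chain."""
    crossing = sum(1 for a, b in pairs if (a < ell) != (b < ell))
    return np.log(2) * crossing
```

In this picture the bond-ranking rule the GNN learns is exactly the `argmax` selection above, which is why pairing accuracy translates directly into agreement on $S(\ell)$.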
Related papers
- Automated discovery of finite volume schemes using Graph Neural Networks [2.867517731896504]
We establish that Graph Neural Networks (GNNs) can serve purposes beyond their traditional role. We show that a GNN trained on a dataset consisting solely of two-node graphs can extrapolate a first-order Finite Volume scheme. Using symbolic regression, we show that the network effectively rediscovers the exact analytical formulation of the standard first-order FV scheme.
arXiv Detail & Related papers (2025-08-26T14:08:46Z) - Sharp Generalization for Nonparametric Regression in Interpolation Space by Over-Parameterized Neural Networks Trained with Preconditioned Gradient Descent and Early Stopping [15.975065054204753]
We study nonparametric regression using over-parameterized two-layer neural networks trained with algorithmic guarantees. We demonstrate that training the neural network with a novel Preconditioned Gradient Descent (PGD) algorithm, equipped with early stopping, achieves a sharp regression rate.
arXiv Detail & Related papers (2024-07-16T03:38:34Z) - E(2) Equivariant Neural Networks for Robust Galaxy Morphology
Classification [0.0]
We train, validate, and test GCNNs equivariant to discrete subgroups of $E(2)$ on the Galaxy10 DECals dataset.
An architecture equivariant to the group $D_{16}$ achieves a $95.52 \pm 0.18\%$ test-set accuracy.
All GCNNs are less susceptible to one-pixel perturbations than an identically constructed CNN.
arXiv Detail & Related papers (2023-11-02T18:00:02Z) - Study of the long-range transverse field Ising model with fermionic
Gaussian states [0.0]
We numerically study the one-dimensional long-range Transverse Field Ising Model (TFIM) in the antiferromagnetic regime at zero temperature.
The spin-spin interaction extends to all spins in the lattice and decays as $1/r^\alpha$, where $r$ denotes the distance between two spins and $\alpha$ is a tunable exponent.
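A dense-matrix sketch of such a long-range TFIM on a small regular lattice (a simplification of the disordered-position model above; the function name is illustrative) can be built from Kronecker products:

```python
import numpy as np

def tfim_long_range(N, alpha, h):
    """H = sum_{i<j} |i-j|^(-alpha) sz_i sz_j - h * sum_i sx_i
    for N spins-1/2, as a dense 2^N x 2^N matrix (small N only)."""
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    sz = np.array([[1.0, 0.0], [0.0, -1.0]])
    eye = np.eye(2)

    def site_op(single, site):
        # Tensor product placing `single` at `site`, identity elsewhere.
        out = np.array([[1.0]])
        for k in range(N):
            out = np.kron(out, single if k == site else eye)
        return out

    H = np.zeros((2**N, 2**N))
    for i in range(N):
        H -= h * site_op(sx, i)                    # transverse field
        for j in range(i + 1, N):
            J = abs(i - j) ** (-float(alpha))      # power-law AF coupling
            H += J * site_op(sz, i) @ site_op(sz, j)
    return H

H = tfim_long_range(N=6, alpha=2.0, h=1.0)
E0 = np.linalg.eigvalsh(H)[0]  # ground-state energy
```

Exact diagonalization of this kind is limited to roughly a dozen spins, which is why the paper resorts to fermionic Gaussian states for larger systems.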
arXiv Detail & Related papers (2023-01-07T21:23:53Z) - Mixed Graph Contrastive Network for Semi-Supervised Node Classification [63.924129159538076]
We propose a novel graph contrastive learning method, termed Mixed Graph Contrastive Network (MGCN). In our method, we improve the discriminative capability of the latent embeddings by an unperturbed augmentation strategy and a correlation reduction mechanism. By combining the two settings, we extract rich supervision information from both the abundant nodes and the rare yet valuable labeled nodes for discriminative representation learning.
arXiv Detail & Related papers (2022-06-06T14:26:34Z) - Graph-adaptive Rectified Linear Unit for Graph Neural Networks [64.92221119723048]
Graph Neural Networks (GNNs) have achieved remarkable success by extending traditional convolution to learning on non-Euclidean data.
We propose Graph-adaptive Rectified Linear Unit (GReLU) which is a new parametric activation function incorporating the neighborhood information in a novel and efficient way.
We conduct comprehensive experiments to show that our plug-and-play GReLU method is efficient and effective given different GNN backbones and various downstream tasks.
arXiv Detail & Related papers (2022-02-13T10:54:59Z) - Decentralized Stochastic Proximal Gradient Descent with Variance
Reduction over Time-varying Networks [30.231314171218994]
In decentralized learning, a network of nodes cooperate to minimize an overall objective function that is usually the finite-sum of their local objectives.
We propose a novel algorithm, namely DPSVRG, to accelerate the decentralized training by leveraging the variance reduction technique.
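DPSVRG builds on the SVRG control-variate gradient estimator; a minimal single-node sketch on a least-squares finite sum (omitting the decentralized gossip-averaging step, with illustrative names and hyperparameters) looks like:

```python
import numpy as np

def svrg_least_squares(A, b, lr=0.02, epochs=100, seed=0):
    """SVRG on f(x) = (1/2n) * ||Ax - b||^2. Each inner step uses
    grad_i(x) - grad_i(snapshot) + full_grad(snapshot); the variance
    of this estimator vanishes as x approaches the optimum."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(epochs):
        snap = x.copy()
        full_grad = A.T @ (A @ snap - b) / n       # anchor (full) gradient
        for _ in range(n):
            i = rng.integers(n)
            gi_x = A[i] * (A[i] @ x - b[i])        # sample gradient at x
            gi_s = A[i] * (A[i] @ snap - b[i])     # same sample at snapshot
            x -= lr * (gi_x - gi_s + full_grad)    # variance-reduced step
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 3))
x_star = np.array([1.0, -2.0, 0.5])
b = A @ x_star                                     # noiseless targets
x_hat = svrg_least_squares(A, b)
```

The decentralized variant replaces the single iterate `x` with per-node copies that are mixed over the time-varying network between gradient steps.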
arXiv Detail & Related papers (2021-12-20T08:23:36Z) - Benchmarking discrete truncated Wigner approximation and neural network
quantum states with the exact dynamics in a Rydberg atomic chain [0.4886997638319856]
We benchmark the discrete truncated Wigner approximation (DTWA) and Neural quantum states (NQS) based on restricted Boltzmann-like machines.
We characterize the excitation dynamics using the maximum and time-averaged number of Rydberg excitations.
arXiv Detail & Related papers (2021-10-05T17:48:05Z) - Permutation-equivariant and Proximity-aware Graph Neural Networks with
Stochastic Message Passing [88.30867628592112]
Graph neural networks (GNNs) are emerging machine learning models on graphs.
Permutation-equivariance and proximity-awareness are two important properties highly desirable for GNNs.
We show that existing GNNs, mostly based on the message-passing mechanism, cannot simultaneously preserve the two properties.
In order to preserve node proximities, we augment the existing GNNs with node representations.
arXiv Detail & Related papers (2020-09-05T16:46:56Z) - Graph Neural Networks: Architectures, Stability and Transferability [176.3960927323358]
Graph Neural Networks (GNNs) are information processing architectures for signals supported on graphs.
They are generalizations of convolutional neural networks (CNNs) in which individual layers contain banks of graph convolutional filters.
arXiv Detail & Related papers (2020-08-04T18:57:36Z) - Towards Deeper Graph Neural Networks with Differentiable Group
Normalization [61.20639338417576]
Graph neural networks (GNNs) learn the representation of a node by aggregating its neighbors.
Over-smoothing is one of the key issues which limit the performance of GNNs as the number of layers increases.
We introduce two over-smoothing metrics and a novel technique, differentiable group normalization (DGN).
arXiv Detail & Related papers (2020-06-12T07:18:02Z) - Infinitely Wide Graph Convolutional Networks: Semi-supervised Learning
via Gaussian Processes [144.6048446370369]
Graph convolutional neural networks(GCNs) have recently demonstrated promising results on graph-based semi-supervised classification.
We propose a GP regression model via GCNs(GPGC) for graph-based semi-supervised learning.
We conduct extensive experiments to evaluate GPGC and demonstrate that it outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2020-02-26T10:02:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.