Detecting Entanglement in High-Spin Quantum Systems via a Stacking Ensemble of Machine Learning Models
- URL: http://arxiv.org/abs/2507.12775v1
- Date: Thu, 17 Jul 2025 04:34:11 GMT
- Title: Detecting Entanglement in High-Spin Quantum Systems via a Stacking Ensemble of Machine Learning Models
- Authors: M. Y. Abd-Rabbou, Amr M. Abdallah, Ahmed A. Zahia, Ashraf A. Gouda, Cong-Feng Qiao
- Abstract summary: This study examines the effectiveness of ensemble machine learning models as a reliable and scalable approach for estimating entanglement, measured by negativity, in quantum systems. We construct an ensemble regressor integrating Neural Networks (NNs), XGBoost (XGB), and Extra Trees (ET). The ensemble model with a CatBoost (CB) stacking meta-learner demonstrates robust performance, accurately predicting negativity across different dimensionalities and state types.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reliable detection and quantification of quantum entanglement, particularly in high-spin or many-body systems, present significant computational challenges for traditional methods. This study examines the effectiveness of ensemble machine learning models as a reliable and scalable approach for estimating entanglement, measured by negativity, in quantum systems. We construct an ensemble regressor integrating Neural Networks (NNs), XGBoost (XGB), and Extra Trees (ET), trained on datasets of pure states and mixed Werner states for various spin dimensions. The ensemble model with a CatBoost (CB) stacking meta-learner demonstrates robust performance, accurately predicting negativity across different dimensionalities and state types. Crucially, visual analysis of prediction scatter plots reveals that the ensemble model exhibits superior predictive consistency and lower deviation from true entanglement values compared to individual strong learners like NNs, even when aggregate metrics are comparable. This enhanced reliability, attributed to error cancellation and variance reduction inherent in ensembling, underscores the potential of this approach to bypass computational bottlenecks and provide a trustworthy tool for characterizing entanglement in high-dimensional quantum physics. An empirical formula for estimating data requirements based on system dimensionality and desired accuracy is also derived.
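The abstract's pipeline can be sketched end-to-end in a minimal form. The sketch below is an illustrative assumption, not the authors' implementation: it computes negativity for two-qubit Werner states via the partial transpose, then trains a scikit-learn stacking ensemble. GradientBoostingRegressor stands in for the paper's XGBoost and CatBoost components to keep the example dependency-free; the two-qubit dimension, dataset size, and all hyperparameters are illustrative.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor, GradientBoostingRegressor, StackingRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

def negativity(rho, d_a=2, d_b=2):
    """N(rho) = sum of |negative eigenvalues| of the partial transpose on subsystem B."""
    rho_pt = (rho.reshape(d_a, d_b, d_a, d_b)
                 .transpose(0, 3, 2, 1)          # swap the two B indices
                 .reshape(d_a * d_b, d_a * d_b))
    ev = np.linalg.eigvalsh(rho_pt)
    return float(np.abs(ev[ev < 0]).sum())

# Two-qubit Werner state: p |psi-><psi-| + (1 - p) I/4; entangled for p > 1/3.
psi_minus = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)
proj = np.outer(psi_minus, psi_minus)

def werner(p):
    return p * proj + (1.0 - p) * np.eye(4) / 4.0

# Dataset: flattened density-matrix entries as features, negativity as label.
rng = np.random.default_rng(0)
ps = rng.uniform(0.0, 1.0, size=400)
X = np.array([werner(p).ravel() for p in ps])
y = np.array([negativity(werner(p)) for p in ps])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stacking ensemble: base learners feed a meta-learner via out-of-fold predictions.
base = [
    ("et", ExtraTreesRegressor(n_estimators=100, random_state=0)),
    ("gb", GradientBoostingRegressor(random_state=0)),
    ("nn", MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)),
]
model = StackingRegressor(estimators=base,
                          final_estimator=GradientBoostingRegressor(random_state=0))
model.fit(X_tr, y_tr)
score = model.score(X_te, y_te)  # R^2 on held-out states
```

`StackingRegressor` trains the meta-learner on cross-validated base predictions, mirroring the stacking setup the abstract describes; swapping in `xgboost.XGBRegressor` and `catboost.CatBoostRegressor` would recover the paper's stated components.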
Related papers
- Beyond Calibration: Assessing the Probabilistic Fit of Neural Regressors via Conditional Congruence [2.2359781747539396]
Deep networks often suffer from overconfidence and misaligned predictive distributions.
We introduce a metric, Conditional Congruence Error (CCE), that uses conditional kernel mean embeddings to estimate the distance between the learned predictive distribution and the empirical, conditional distribution in a dataset.
We show that using CCE to measure congruence 1) accurately quantifies misalignment between distributions when the data generating process is known, 2) effectively scales to real-world, high dimensional image regression tasks, and 3) can be used to gauge model reliability on unseen instances.
arXiv Detail & Related papers (2024-05-20T23:30:07Z) - Evaluation of machine learning architectures on the quantification of epistemic and aleatoric uncertainties in complex dynamical systems [0.0]
Uncertainty Quantification (UQ) is a self-assessed estimate of the model error.
We examine several machine learning techniques, including both Gaussian processes and a family of UQ-augmented neural networks.
We evaluate UQ accuracy (distinct from model accuracy) using two metrics: the distribution of normalized residuals on validation data, and the distribution of estimated uncertainties.
arXiv Detail & Related papers (2023-06-27T02:35:25Z) - Single-model uncertainty quantification in neural network potentials does not consistently outperform model ensembles [0.7499722271664145]
Neural networks (NNs) often assign high confidence to their predictions, even for points far out-of-distribution.
Uncertainty quantification (UQ) is a challenge when they are employed to model interatomic potentials in materials systems.
Differentiable UQ techniques can find new informative data and drive active learning loops for robust potentials.
arXiv Detail & Related papers (2023-05-02T19:41:17Z) - Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z) - Deep learning of spatial densities in inhomogeneous correlated quantum systems [0.0]
We show that we can learn to predict densities using convolutional neural networks trained on random potentials.
We show that our approach handles well the interplay of interference and interactions, as well as the behaviour of models with phase transitions in inhomogeneous situations.
arXiv Detail & Related papers (2022-11-16T17:10:07Z) - Identification of quantum entanglement with Siamese convolutional neural networks and semi-supervised learning [0.0]
Quantum entanglement is a fundamental property commonly used in various quantum information protocols and algorithms.
In this study, we use deep convolutional NNs, a type of supervised machine learning, to identify quantum entanglement for any bipartition in a 3-qubit system.
arXiv Detail & Related papers (2022-10-13T23:17:55Z) - Generalization Metrics for Practical Quantum Advantage in Generative Models [68.8204255655161]
Generative modeling is a widely accepted natural use case for quantum computers.
We construct a simple and unambiguous approach to probe practical quantum advantage for generative modeling by measuring the algorithm's generalization performance.
Our simulation results show that our quantum-inspired models have up to a $68\times$ enhancement in generating unseen unique and valid samples.
arXiv Detail & Related papers (2022-01-21T16:35:35Z) - Equivariant vector field network for many-body system modeling [65.22203086172019]
Equivariant Vector Field Network (EVFN) is built on a novel equivariant basis and the associated scalarization and vectorization layers.
We evaluate our method on predicting trajectories of simulated Newton mechanics systems with both full and partially observed data.
arXiv Detail & Related papers (2021-10-26T14:26:25Z) - Hessian-based toolbox for reliable and interpretable machine learning in
physics [58.720142291102135]
We present a toolbox for interpretability and reliability, agnostic of the model architecture.
It provides a notion of the influence of the input data on the prediction at a given test point, an estimation of the uncertainty of the model predictions, and an agnostic score for the model predictions.
Our work opens the road to the systematic use of interpretability and reliability methods in ML applied to physics and, more generally, science.
arXiv Detail & Related papers (2021-08-04T16:32:59Z) - Quantum-tailored machine-learning characterization of a superconducting
qubit [50.591267188664666]
We develop an approach to characterize the dynamics of a quantum device and learn device parameters.
This approach outperforms physics-agnostic recurrent neural networks trained on numerically generated and experimental data.
This demonstration shows how leveraging domain knowledge improves the accuracy and efficiency of this characterization task.
arXiv Detail & Related papers (2021-06-24T15:58:57Z) - Post-mortem on a deep learning contest: a Simpson's paradox and the
complementary roles of scale metrics versus shape metrics [61.49826776409194]
We analyze a corpus of models made publicly available for a contest to predict the generalization accuracy of neural network (NN) models.
We identify what amounts to a Simpson's paradox: "scale" metrics perform well overall but perform poorly on sub-partitions of the data.
We present two novel shape metrics, one data-independent, and the other data-dependent, which can predict trends in the test accuracy of a series of NNs.
arXiv Detail & Related papers (2021-06-01T19:19:49Z) - Random Sampling Neural Network for Quantum Many-Body Problems [0.0]
We propose a general numerical method, Random Sampling Neural Networks (RSNN), to utilize the pattern recognition technique for the random sampling matrix elements of an interacting many-body system via a self-supervised learning approach.
Several exactly solvable 1D models, including the Ising model with a transverse field, the Fermi-Hubbard model, and the spin-$1/2$ $XXZ$ model, are used to test the applicability of RSNN.
arXiv Detail & Related papers (2020-11-10T15:52:44Z) - GINNs: Graph-Informed Neural Networks for Multiscale Physics [1.1470070927586016]
Graph-Informed Neural Network (GINN) is a hybrid approach combining deep learning with probabilistic graphical models (PGMs).
GINNs produce kernel density estimates of relevant non-Gaussian, skewed QoIs with tight confidence intervals.
arXiv Detail & Related papers (2020-06-26T05:47:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.