Low resource entanglement classification from neural network interpretability
- URL: http://arxiv.org/abs/2602.04366v1
- Date: Wed, 04 Feb 2026 09:42:45 GMT
- Title: Low resource entanglement classification from neural network interpretability
- Authors: A. García-Velo, R. Puebla, Y. Ban, E. Torrontegui, M. Paraschiv
- Abstract summary: Entanglement is a central resource in quantum information and quantum technologies. Machine learning has emerged as a promising alternative, enabling entanglement characterization from incomplete measurement data. We introduce a unified and interpretable framework for entanglement classification of two- and three-qubit states.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Entanglement is a central resource in quantum information and quantum technologies, yet its characterization remains challenging due to both theoretical complexity and measurement requirements. Machine learning has emerged as a promising alternative, enabling entanglement characterization from incomplete measurement data; however, model interpretability remains a challenge. In this work, we introduce a unified and interpretable framework for SLOCC entanglement classification of two- and three-qubit states, encompassing both pure and mixed states. We train dense and convolutional neural networks on Pauli-measurement outcomes, provide design guidelines for each architecture, and systematically compare their performance across types of states. To interpret the models, we compute Shapley values to quantify the contribution of each measurement, analyze measurement-importance patterns across different systems, and use these insights to guide a measurement-reduction scheme. Accuracy-versus-measurement curves and comparisons with analytical entanglement criteria demonstrate the minimal resources required for reliable classification and highlight both the capabilities and limitations of Shapley-based interpretability when using machine learning models for entanglement detection and classification.
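The Shapley-value step in the abstract can be illustrated with a toy computation. The following is a minimal sketch, assuming a hypothetical subset-score table in place of the paper's trained classifiers; the measurement labels ("XX", "ZZ") and the accuracy values are invented for illustration only.

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value_fn):
    # Exact Shapley value: phi[f] = sum over subsets S not containing f of
    # |S|! * (n - |S| - 1)! / n! * (v(S ∪ {f}) - v(S)).
    n = len(features)
    phi = {}
    for f in features:
        rest = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for S in combinations(rest, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value_fn(frozenset(S) | {f}) - value_fn(frozenset(S)))
        phi[f] = total
    return phi

# Toy "classifier accuracy" as a function of which Pauli measurements are
# available; both the labels and the scores are hypothetical.
scores = {frozenset(): 0.5,
          frozenset({"XX"}): 0.6,
          frozenset({"ZZ"}): 0.9,
          frozenset({"XX", "ZZ"}): 1.0}
phi = shapley_values(["XX", "ZZ"], lambda s: scores[s])
```

In this toy table the "ZZ" measurement earns the larger Shapley value, which is the kind of signal the paper's measurement-reduction scheme would use to decide which measurements can be dropped. By the efficiency property, the values sum to the accuracy gap between the full and empty measurement sets.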
Related papers
- Towards Worst-Case Guarantees with Scale-Aware Interpretability [58.519943565092724]
Neural networks organize information according to the hierarchical, multi-scale structure of natural data. We propose a unifying research agenda -- scale-aware interpretability -- to develop formal machinery and interpretability tools.
arXiv Detail & Related papers (2026-02-05T01:22:31Z) - A Multi-Resolution Benchmark Framework for Spatial Reasoning Assessment in Neural Networks [40.69150418258213]
This paper presents preliminary results in the definition of a comprehensive benchmark framework designed to evaluate spatial reasoning capabilities in neural networks. The framework is currently being used to study the capabilities of nnU-Net.
arXiv Detail & Related papers (2025-08-18T09:04:13Z) - Detecting Entanglement in High-Spin Quantum Systems via a Stacking Ensemble of Machine Learning Models [0.0]
This study examines the effectiveness of ensemble machine learning models as a reliable and scalable approach for estimating entanglement, measured by negativity, in quantum systems. We construct an ensemble regressor integrating Neural Networks (NNs), XGBoost (XGB), and Extra Trees (ET). The ensemble model with a CatBoost (CB) stacking meta-learner demonstrates robust performance, accurately predicting negativity across different dimensionalities and state types.
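The stacking idea in this summary can be sketched in a few lines. This is a deliberately simplified stand-in: the cited work combines NNs, XGBoost, and Extra Trees under a CatBoost meta-learner, whereas here the base models are a mean predictor and a 1-nearest-neighbour regressor, and the meta-learner is a grid search over a single blend weight.

```python
def base_mean(train_x, train_y, x):
    # Base model 1: always predicts the training-target mean.
    return sum(train_y) / len(train_y)

def base_nn(train_x, train_y, x):
    # Base model 2: 1-nearest-neighbour regression.
    i = min(range(len(train_x)), key=lambda j: abs(train_x[j] - x))
    return train_y[i]

def fit_meta_weight(train_x, train_y, bases, grid=101):
    # Toy meta-learner: choose the convex blend weight w minimising the
    # squared error of w*bases[0] + (1-w)*bases[1] on the training data.
    def loss(w):
        return sum((w * bases[0](train_x, train_y, x)
                    + (1 - w) * bases[1](train_x, train_y, x) - y) ** 2
                   for x, y in zip(train_x, train_y))
    return min((i / (grid - 1) for i in range(grid)), key=loss)

train_x = [0.0, 1.0, 2.0, 3.0]
train_y = [0.0, 1.0, 2.0, 3.0]  # y = x, so the 1-NN base fits perfectly
w = fit_meta_weight(train_x, train_y, (base_mean, base_nn))
```

Here the meta-learner puts all its weight on the nearest-neighbour base, since it reproduces the training targets exactly. Note that a real stacking pipeline fits the meta-learner on out-of-fold base predictions to avoid leakage; fitting on in-sample predictions, as in this sketch, is only acceptable for illustration.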
arXiv Detail & Related papers (2025-07-17T04:34:11Z) - Mechanistic understanding and validation of large AI models with SemanticLens [13.712668314238082]
Unlike human-engineered systems such as aeroplanes, the inner workings of AI models remain largely opaque. This paper introduces SemanticLens, a universal explanation method for neural networks that maps hidden knowledge encoded by components.
arXiv Detail & Related papers (2025-01-09T17:47:34Z) - Saliency Assisted Quantization for Neural Networks [0.0]
This paper tackles the inherent black-box nature of deep learning models by providing real-time explanations during the training phase.
We employ established quantization techniques to address resource constraints.
To assess the effectiveness of our approach, we explore how quantization influences the interpretability and accuracy of Convolutional Neural Networks.
arXiv Detail & Related papers (2024-11-07T05:16:26Z) - Nonlinear classification of neural manifolds with contextual information [6.292933471495322]
We introduce a theoretical framework that leverages latent directions in input space, which can be related to contextual information. We derive an exact formula for the context-dependent manifold capacity that depends on manifold geometry and context correlations. Our framework's increased expressivity captures representation reformatting in deep networks at early stages of the layer hierarchy, previously inaccessible to analysis.
arXiv Detail & Related papers (2024-05-10T23:37:31Z) - Evaluating quantum generative models via imbalanced data classification benchmarks [0.0]
We analyze synthetic data generated from a hybrid quantum-classical neural network adapted from twenty different real-world data sets.
We leverage this approach to elucidate the qualities of a problem that make it more or less likely to be amenable to a hybrid quantum-classical generative model.
arXiv Detail & Related papers (2023-08-21T16:46:36Z) - Persistence-based operators in machine learning [62.997667081978825]
We introduce a class of persistence-based neural network layers.
Persistence-based layers allow the users to easily inject knowledge about symmetries respected by the data, are equipped with learnable weights, and can be composed with state-of-the-art neural architectures.
arXiv Detail & Related papers (2022-12-28T18:03:41Z) - Interpreting Neural Policies with Disentangled Tree Representations [58.769048492254555]
We study interpretability of compact neural policies through the lens of disentangled representation.
We leverage decision trees to obtain factors of variation for disentanglement in robot learning.
We introduce interpretability metrics that measure disentanglement of learned neural dynamics.
arXiv Detail & Related papers (2022-10-13T01:10:41Z) - Post-mortem on a deep learning contest: a Simpson's paradox and the complementary roles of scale metrics versus shape metrics [61.49826776409194]
We analyze a corpus of models made publicly-available for a contest to predict the generalization accuracy of neural network (NN) models.
We identify what amounts to a Simpson's paradox: "scale" metrics perform well overall but perform poorly on sub-partitions of the data.
We present two novel shape metrics, one data-independent, and the other data-dependent, which can predict trends in the test accuracy of a series of NNs.
arXiv Detail & Related papers (2021-06-01T19:19:49Z) - Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z) - How Faithful is your Synthetic Data? Sample-level Metrics for Evaluating and Auditing Generative Models [95.8037674226622]
We introduce a 3-dimensional evaluation metric that characterizes the fidelity, diversity and generalization performance of any generative model in a domain-agnostic fashion.
Our metric unifies statistical divergence measures with precision-recall analysis, enabling sample- and distribution-level diagnoses of model fidelity and diversity.
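The precision-recall view of generative-model fidelity and diversity mentioned above can be illustrated with a crude 1-D support test. This sketch is not the paper's metric: the data, the distance radius, and the coverage rule are invented stand-ins meant only to show how precision (are generated samples realistic?) and recall (are all real modes covered?) pull apart.

```python
def coverage(samples_a, samples_b, radius):
    # Fraction of points in samples_a lying within `radius` of some point
    # in samples_b -- a crude estimate of membership in b's support.
    hits = sum(any(abs(a - b) <= radius for b in samples_b) for a in samples_a)
    return hits / len(samples_a)

real = [0.0, 1.0, 2.0, 3.0]
fake = [0.1, 0.9, 10.0]                # one generated sample is off-support
precision = coverage(fake, real, 0.2)  # fidelity of generated samples
recall = coverage(real, fake, 0.2)     # how much of the real support is hit
```

In this toy case one generated point is unrealistic (low precision) and two real modes go uncovered (low recall); a single divergence number would conflate the two failure modes, which is the motivation for a multi-dimensional metric.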
arXiv Detail & Related papers (2021-02-17T18:25:30Z) - Estimating informativeness of samples with Smooth Unique Information [108.25192785062367]
We measure how much a sample informs the final weights and how much it informs the function computed by the weights.
We give efficient approximations of these quantities using a linearized network.
We apply these measures to several problems, such as dataset summarization.
arXiv Detail & Related papers (2021-01-17T10:29:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.