Learning entanglement from tomography data: contradictory measurement importance for neural networks and random forests
- URL: http://arxiv.org/abs/2505.03371v2
- Date: Thu, 24 Jul 2025 17:18:17 GMT
- Title: Learning entanglement from tomography data: contradictory measurement importance for neural networks and random forests
- Authors: Pavel Baláž, Mateusz Krawczyk, Jarosław Pawłowski, Katarzyna Roszak
- Abstract summary: We study the effectiveness of two machine learning techniques, neural networks and random forests, in the quantification of entanglement from two-qubit tomography data. Although neural networks yield better accuracy, we also find that the way that the two methods reach their prediction is starkly different.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the effectiveness of two distinct machine learning techniques, neural networks and random forests, in the quantification of entanglement from two-qubit tomography data. Although we predictably find that neural networks yield better accuracy, we also find that the way that the two methods reach their prediction is starkly different. This is seen in the measurements which are the most important for the classification. Neural networks follow the intuitive prediction that measurements containing information about non-local coherences are most important for entanglement, but random forests signify the dominance of information contained in occupation measurements. This is because occupation measurements are necessary for the extraction of data about all other density matrix elements from the remaining measurements. The same discrepancy does not occur when the models are used to learn entanglement directly from the elements of the density matrix, so it is the result of the scattering of information and interdependence of measurement data. As a result, the models behave differently when noise is introduced to various measurements, which can be harnessed to obtain more reliable information about entanglement from noisy tomography data.
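A minimal sketch of the quantities involved (illustrative only, not the authors' code): for a normalized pure two-qubit state |psi> = a|00> + b|01> + c|10> + d|11> with real amplitudes, the concurrence C = 2|ad - bc| is a standard entanglement monotone that can serve as a supervised-learning label, and simple Pauli correlators show why coherence-sensitive measurements carry the entanglement information that occupation measurements alone miss.

```python
# Illustrative sketch (not the paper's implementation).
import math

def concurrence_pure(a, b, c, d):
    """Concurrence of a normalized pure two-qubit state with real amplitudes."""
    return 2 * abs(a * d - b * c)

def zz_correlator(a, b, c, d):
    """<sigma_z (x) sigma_z>: an occupation-type measurement."""
    return a**2 - b**2 - c**2 + d**2

def xx_correlator(a, b, c, d):
    """<sigma_x (x) sigma_x>: sensitive to non-local coherences (real amplitudes)."""
    return 2 * (a * d + b * c)

s = 1 / math.sqrt(2)
bell = (s, 0.0, 0.0, s)       # maximally entangled Bell state
prod = (1.0, 0.0, 0.0, 0.0)   # separable product state |00>

c_bell, c_prod = concurrence_pure(*bell), concurrence_pure(*prod)
# Both states give <ZZ> = 1, so occupation data alone cannot separate them,
# while <XX> = 1 for the Bell state but 0 for |00>, exposing the coherences.
zz_pair = (zz_correlator(*bell), zz_correlator(*prod))
xx_pair = (xx_correlator(*bell), xx_correlator(*prod))
```

Labels of this kind, paired with simulated measurement outcomes, are the typical inputs for training regressors such as the neural networks and random forests compared in the abstract.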
Related papers
- A Simple and Effective Method for Uncertainty Quantification and OOD Detection [0.0]
We propose an effective method based on feature space density to quantify uncertainty for distributional shifts.
Specifically, we leverage the information potential field derived from kernel density estimation to approximate the feature space density of the training set.
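A toy sketch of the underlying idea (the function name, bandwidth, and data are illustrative, not the paper's implementation): score a test point by the Gaussian kernel density of the training-set features at that point; low density suggests a distributional shift.

```python
# Hand-rolled 1-D Gaussian kernel density estimate (illustrative sketch).
import math

def kde_density(x, train, bandwidth=0.5):
    """Average Gaussian kernel density of point x w.r.t. training features."""
    norm = 1.0 / (bandwidth * math.sqrt(2 * math.pi))
    return sum(norm * math.exp(-0.5 * ((x - t) / bandwidth) ** 2)
               for t in train) / len(train)

train_feats = [0.1, -0.2, 0.05, 0.3, -0.1]   # in-distribution feature values
in_dist = kde_density(0.0, train_feats)       # near the training cluster
ood = kde_density(5.0, train_feats)           # far from it: near-zero density
```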
arXiv Detail & Related papers (2025-08-01T16:31:23Z)
- Fusing CFD and measurement data using transfer learning [49.1574468325115]
We introduce a non-linear method based on neural networks combining simulation and measurement data via transfer learning.
In a first step, the neural network is trained on simulation data to learn spatial features of the distributed quantities.
The second step involves transfer learning on the measurement data to correct for systematic errors between simulation and measurement by only re-training a small subset of the entire neural network model.
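The two-step scheme can be caricatured with a linear model (purely illustrative, not the paper's network): "pretrain" slope and intercept on plentiful simulation data, then adapt only the intercept on a few measurements carrying a systematic offset, keeping the rest of the model frozen.

```python
# Toy stand-in for transfer learning: freeze the slope, re-fit only the bias.

def fit_line(xs, ys):
    """Ordinary least squares for slope w and intercept b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    w = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return w, my - w * mx

# Step 1: pretrain on simulation data generated from y = 2x + 1.
sim_x = [0.0, 1.0, 2.0, 3.0, 4.0]
sim_y = [2 * x + 1 for x in sim_x]
w, b = fit_line(sim_x, sim_y)

# Step 2: measurements have a systematic +0.5 offset; re-fit only b,
# with the slope w frozen at its pretrained value.
meas_x = [0.5, 1.5]
meas_y = [2 * x + 1.5 for x in meas_x]
b_adapted = sum(y - w * x for x, y in zip(meas_x, meas_y)) / len(meas_x)
```

Only one parameter is updated in step 2, mirroring the idea of re-training a small subset of the full model.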
arXiv Detail & Related papers (2025-07-28T07:21:46Z)
- Unsupervised detection of semantic correlations in big data [47.201377047286215]
We present a method to detect semantic correlations in high-dimensional data represented as binary numbers.
We estimate the binary intrinsic dimension of a dataset, which quantifies the minimum number of independent coordinates needed to describe the data.
The proposed algorithm is largely insensitive to the so-called curse of dimensionality, and can therefore be used in big data analysis.
arXiv Detail & Related papers (2024-11-04T14:37:07Z)
- Assessing Neural Network Representations During Training Using Noise-Resilient Diffusion Spectral Entropy [55.014926694758195]
Entropy and mutual information in neural networks provide rich information on the learning process.
We leverage data geometry to access the underlying manifold and reliably compute these information-theoretic measures.
We show that they form noise-resistant measures of intrinsic dimensionality and relationship strength in high-dimensional simulated data.
arXiv Detail & Related papers (2023-12-04T01:32:42Z)
- Graph Neural Networks with Trainable Adjacency Matrices for Fault Diagnosis on Multivariate Sensor Data [69.25738064847175]
The behavior of the signals from each sensor must be considered separately, taking into account their correlations and hidden relationships with each other.
The graph nodes can be represented as data from the different sensors, and the edges can display the influence of these data on each other.
It was proposed to construct the graph during the training of the graph neural network. This allows models to be trained on data where the dependencies between the sensors are not known in advance.
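The idea of a trainable adjacency matrix can be sketched in miniature (a heavily simplified stand-in, not the paper's architecture): treat the sensor adjacency matrix A as a free parameter and learn it by gradient descent so that one propagation step A @ x reproduces the observed sensor interactions.

```python
# Minimal trainable-adjacency sketch: learn A in y ~= A @ x by gradient descent.

def matvec(A, x):
    """Matrix-vector product with plain Python lists."""
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def learn_adjacency(x, y, steps=500, lr=0.05):
    """Fit A minimizing the squared error of one propagation step, from A = 0."""
    n = len(x)
    A = [[0.0] * n for _ in range(n)]
    for _ in range(steps):
        pred = matvec(A, x)
        for i in range(n):
            err = pred[i] - y[i]
            for j in range(n):
                A[i][j] -= lr * 2 * err * x[j]   # gradient of (pred_i - y_i)^2
    return A

# Sensor readings x and their one-step influenced values y = A_true @ x.
x = [1.0, 2.0]
A_true = [[0.0, 0.5], [0.5, 0.0]]
y = matvec(A_true, x)
A_learned = learn_adjacency(x, y)
pred = matvec(A_learned, x)   # should reproduce y after training
```

With a single sample the learned A is not unique, but the fitted propagation step matches the observed interactions, which is the quantity the training objective constrains.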
arXiv Detail & Related papers (2022-10-20T11:03:21Z)
- Measuring Overfitting in Convolutional Neural Networks using Adversarial Perturbations and Label Noise [3.395452700023097]
Overfitted neural networks tend to memorize noise in the training data rather than generalize to unseen data.
We introduce several anti-overfitting measures in architectures based on VGG and ResNet.
We assess the applicability of the proposed metrics by measuring the overfitting degree of several CNN architectures outside of our model pool.
arXiv Detail & Related papers (2022-09-27T13:40:53Z) - Estimating informativeness of samples with Smooth Unique Information [108.25192785062367]
We measure how much a sample informs the final weights and how much it informs the function computed by the weights.
We give efficient approximations of these quantities using a linearized network.
We apply these measures to several problems, such as dataset summarization.
arXiv Detail & Related papers (2021-01-17T10:29:29Z) - Malicious Network Traffic Detection via Deep Learning: An Information
Theoretic View [0.0]
We study how homeomorphism affects learned representation of a malware traffic dataset.
Our results suggest that although the details of learned representations and the specific coordinate system defined over the manifold of all parameters differ slightly, the functional approximations are the same.
arXiv Detail & Related papers (2020-09-16T15:37:44Z) - Graph Embedding with Data Uncertainty [113.39838145450007]
Spectral-based subspace learning is a common data preprocessing step in many machine learning pipelines.
Most subspace learning methods do not take into consideration possible measurement inaccuracies or artifacts that can lead to data with high uncertainty.
arXiv Detail & Related papers (2020-09-01T15:08:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.