Entanglement measures for detectability
- URL: http://arxiv.org/abs/2311.11189v3
- Date: Tue, 07 Jan 2025 10:40:32 GMT
- Title: Entanglement measures for detectability
- Authors: Masahito Hayashi, Yuki Ito
- Abstract summary: We propose new entanglement measures that quantify detection performance in a hypothesis-testing setting.
We clarify how our measures work for detecting an entangled state by extending the quantum Sanov theorem.
We show how to derive an entanglement witness for a given entangled state using the geometrical structure of this measure.
- Score: 45.41082277680607
- Abstract: We propose new entanglement measures that quantify detection performance in a hypothesis-testing setting. We clarify how our measures work for detecting an entangled state by extending the quantum Sanov theorem. Our analysis covers the finite-length setting. Exploiting these entanglement measures, we show how to derive an entanglement witness for a given entangled state using the geometrical structure of the measure. We derive calculation formulas for maximally correlated states and propose algorithms that work for general entangled states. In addition, we investigate how our algorithm solves the membership problem for separability. Further, employing this algorithm, we propose a method to find an entanglement witness for a given entangled state.
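As an illustrative aside (this is the standard PPT/negativity check, not the measures proposed in the paper above), detecting a given entangled state can be made concrete for two qubits: a state is entangled iff its partial transpose has a negative eigenvalue, and the negativity quantifies this. A minimal sketch, assuming a Bell state as the test input:

```python
import numpy as np

def partial_transpose(rho, dims=(2, 2)):
    """Partial transpose over the second subsystem of a bipartite state."""
    dA, dB = dims
    r = rho.reshape(dA, dB, dA, dB)
    # <i j| rho^{T_B} |k l> = <i l| rho |k j>: swap the two B indices
    return r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)

# Bell state |Phi+> = (|00> + |11>) / sqrt(2)
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho = np.outer(phi, phi)

eigs = np.linalg.eigvalsh(partial_transpose(rho))
negativity = sum(abs(e) for e in eigs if e < 0)
print(negativity)  # 0.5 for a Bell state
```

For two qubits the PPT test is necessary and sufficient for separability, so this also solves the membership problem in that small case; in higher dimensions it is only a necessary condition.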
Related papers
- Rethinking State Disentanglement in Causal Reinforcement Learning [78.12976579620165]
Causality provides rigorous theoretical support for ensuring that the underlying states can be uniquely recovered through identifiability.
We revisit this research line and find that incorporating RL-specific context can reduce unnecessary assumptions in previous identifiability analyses for latent states.
We propose a novel approach for general partially observable Markov Decision Processes (POMDPs) by replacing the complicated structural constraints in previous methods with two simple constraints for transition and reward preservation.
arXiv Detail & Related papers (2024-08-24T06:49:13Z)
- The Reinforce Policy Gradient Algorithm Revisited [7.894349646617293]
We revisit the Reinforce policy gradient algorithm from the literature.
We propose a major enhancement to the basic algorithm.
We provide a proof of convergence for this new algorithm.
arXiv Detail & Related papers (2023-10-08T04:05:13Z)
- Algorithm for evaluating distance-based entanglement measures [0.0]
We propose an efficient algorithm for evaluating distance-based entanglement measures.
Our approach builds on Gilbert's algorithm for convex optimization, providing a reliable upper bound on the entanglement of a given arbitrary state.
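For illustration, here is a minimal Euclidean version of Gilbert's algorithm over a polytope (a stand-in for the set of separable states; the vertex set and linear oracle below are simplified assumptions, not the paper's implementation). Because every iterate stays inside the convex set, the current distance to the target point is always a valid upper bound, which is the property the entry above relies on:

```python
import numpy as np

def gilbert_distance(p, vertices, iters=200):
    """Approximate the Euclidean distance from point p to the convex hull
    of `vertices` (rows) with Gilbert's algorithm. The iterate x always
    lies in the hull, so ||p - x|| upper-bounds the true distance."""
    x = vertices[0].astype(float)
    for _ in range(iters):
        d = p - x
        # linear oracle: hull vertex maximizing <d, v>
        v = vertices[np.argmax(vertices @ d)]
        w = v - x
        denom = w @ w
        if denom < 1e-15:
            break
        # exact line search on the segment [x, v], clipped to [0, 1]
        t = np.clip((d @ w) / denom, 0.0, 1.0)
        x = x + t * w
    return np.linalg.norm(p - x), x

# unit square as the hull; nearest point to (2, 0.5) is (1, 0.5), distance 1
verts = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
dist, x = gilbert_distance(np.array([2.0, 0.5]), verts)
print(round(dist, 4))  # 1.0
```

In the quantum setting the linear oracle ranges over product states rather than polytope vertices, but the upper-bound guarantee works the same way.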
arXiv Detail & Related papers (2023-08-04T13:42:55Z)
- Bounded Projection Matrix Approximation with Applications to Community Detection [1.8876415010297891]
We introduce a new differentiable convex penalty and derive an alternating direction method of multipliers (ADMM) algorithm.
Numerical experiments demonstrate the superiority of our algorithm over its competitors.
arXiv Detail & Related papers (2023-05-21T06:55:10Z)
- Entanglement quantification from collective measurements processed by machine learning [0.0]
Instead of analytical formulae, we employ artificial neural networks to predict the amount of entanglement in a quantum state.
For the purpose of our research, we consider general two-qubit states and their negativity as entanglement quantifier.
arXiv Detail & Related papers (2022-03-03T10:03:57Z)
- Detecting Entanglement in Unfaithful States [0.0]
An entanglement witness is an effective method to detect entanglement in unknown states without performing full tomography.
We propose a new way to detect entanglement by computing a lower bound on entanglement from measurement results.
arXiv Detail & Related papers (2020-10-12T22:24:22Z)
- A Distributional Analysis of Sampling-Based Reinforcement Learning Algorithms [67.67377846416106]
We present a distributional approach to theoretical analyses of reinforcement learning algorithms for constant step-sizes.
We show that value-based methods such as TD($\lambda$) and $Q$-learning have update rules which are contractive in the space of distributions of functions.
arXiv Detail & Related papers (2020-03-27T05:13:29Z)
- Active Model Estimation in Markov Decision Processes [108.46146218973189]
We study the problem of efficient exploration in order to learn an accurate model of an environment, modeled as a Markov decision process (MDP).
We show that our Markov-based algorithm outperforms both our original algorithm and the maximum entropy algorithm in the small sample regime.
arXiv Detail & Related papers (2020-03-06T16:17:24Z)
- Approximate MMAP by Marginal Search [78.50747042819503]
We present a strategy for marginal MAP queries in graphical models.
The proposed confidence measure properly detects instances for which the algorithm is accurate.
For sufficiently high confidence levels, the algorithm gives the exact solution or an approximation whose Hamming distance from the exact one is small.
arXiv Detail & Related papers (2020-02-12T07:41:13Z)
- Direct estimation of quantum coherence by collective measurements [54.97898890263183]
We introduce a collective measurement scheme for estimating the amount of coherence in quantum states.
Our scheme outperforms other estimation methods based on tomography or adaptive measurements.
We show that our method is accessible with today's technology by implementing it experimentally with photons.
arXiv Detail & Related papers (2020-01-06T03:50:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.