Improving entanglement purification through coherent superposition of roles
- URL: http://arxiv.org/abs/2408.00844v1
- Date: Thu, 1 Aug 2024 18:00:52 GMT
- Title: Improving entanglement purification through coherent superposition of roles
- Authors: Jorge Miguel-Ramiro, Alexander Pirker, Wolfgang Dür
- Abstract summary: Entanglement purification and distillation protocols are essential for harnessing the full potential of quantum communication technologies.
We introduce a novel superposed entanglement purification design strategy, leveraging coherent superpositions of the roles of entangled states to enhance purification efficiency.
- Score: 44.99833362998488
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Entanglement purification and distillation protocols are essential for harnessing the full potential of quantum communication technologies. Multiple strategies have been proposed to approach and optimize such protocols, most of them, however, restricted to Clifford operations. In this paper, we introduce a novel superposed entanglement purification design strategy, leveraging coherent superpositions of the roles of entangled states to enhance purification efficiency. We demonstrate how this approach can be hierarchically integrated with existing entanglement purification strategies, consistently improving protocol performance.
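For orientation, here is a minimal sketch of the standard recurrence (BBPSSW-type) purification step that designs of this kind build on and aim to outperform: both parties apply bilateral CNOTs to two noisy Werner pairs, measure the second pair, and keep the first pair when the outcomes coincide. This is the textbook baseline fidelity map, not the superposed-role protocol introduced in the paper.

```python
# Standard BBPSSW recurrence map for Werner states of fidelity F with the
# target Bell state; a baseline sketch, not the paper's protocol.

def bbpssw_step(F):
    """One purification round on two Werner pairs of fidelity F.
    Returns (output fidelity, success probability) after bilateral CNOTs
    and post-selection on coincident target-pair measurement outcomes."""
    q = (1.0 - F) / 3.0
    p_success = F**2 + 2.0 * F * q + 5.0 * q**2
    F_out = (F**2 + q**2) / p_success
    return F_out, p_success

F = 0.75
for round_idx in range(1, 6):
    F, p = bbpssw_step(F)
    print(f"round {round_idx}: fidelity {F:.4f}, success prob {p:.3f}")
```

Each successful round consumes one of the two pairs and only succeeds probabilistically; reducing this overhead is precisely what improved purification designs target.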
Related papers
- Advantage Distillation for Quantum Key Distribution [0.40964539027092917]
Building on the entanglement distillation protocol, our framework integrates all the existing key distillation methods.
Our framework can achieve higher key rates, particularly without one-time pad encryption for postprocessing.
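As a concrete point of reference, the following is a minimal sketch of classical advantage distillation in its simplest repetition-block form; it illustrates the kind of post-processing referred to above and is not the framework proposed in that paper.

```python
# Repetition-block advantage distillation (Maurer-style), as a generic
# illustration of classical key post-processing.

def advantage_distillation(eps, block_size):
    """Alice announces the parities of a block of raw key bits relative to its
    first bit; Bob keeps the block only if his parities match.  Accepted blocks
    then carry an error pattern that is either all-zero or all-one, which
    lowers the effective bit-error rate at the cost of throughput."""
    p_all_correct = (1.0 - eps) ** block_size
    p_all_wrong = eps ** block_size
    p_accept = p_all_correct + p_all_wrong
    eps_out = p_all_wrong / p_accept
    return eps_out, p_accept

eps = 0.15  # raw bit-error rate
for n in (1, 2, 4, 8):
    e, p = advantage_distillation(eps, n)
    print(f"block size {n}: error rate {e:.4f}, acceptance prob {p:.4f}")
```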
arXiv Detail & Related papers (2024-04-23T04:27:03Z) - Theoretically Guaranteed Policy Improvement Distilled from Model-Based Planning [64.10794426777493]
Model-based reinforcement learning (RL) has demonstrated remarkable successes on a range of continuous control tasks.
Recent practices tend to distill optimized action sequences into an RL policy during the training phase.
We develop an approach to distill from model-based planning to the policy.
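For illustration only, here is a generic imitation-style distillation sketch, fitting a softmax policy to planner-chosen actions on a hypothetical dataset; it conveys the basic idea of distilling a planner into a reactive policy but is not the guaranteed-improvement scheme the paper develops.

```python
import numpy as np

# Fit a linear softmax policy to (state, planner-action) pairs by minimizing
# cross-entropy; a toy stand-in for "distilling" planning into a policy.

rng = np.random.default_rng(0)
n_state_dim, n_actions = 4, 3

# Hypothetical dataset: states seen during planning and the planner's choices.
states = rng.normal(size=(512, n_state_dim))
planner_actions = (states @ rng.normal(size=(n_state_dim, n_actions))).argmax(axis=1)

W = np.zeros((n_state_dim, n_actions))  # policy parameters
lr = 0.5
for _ in range(200):
    logits = states @ W
    logits -= logits.max(axis=1, keepdims=True)
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    onehot = np.eye(n_actions)[planner_actions]
    grad = states.T @ (probs - onehot) / len(states)  # cross-entropy gradient
    W -= lr * grad

accuracy = (np.argmax(states @ W, axis=1) == planner_actions).mean()
print(f"policy matches planner on {accuracy:.1%} of states")
```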
arXiv Detail & Related papers (2023-07-24T16:52:31Z) - Entanglement Purification of Hypergraph States [0.0]
Entanglement purification describes a primitive in quantum information processing, where several copies of noisy quantum states are distilled into few copies of nearly-pure states of high quality.
We present optimized protocols for the purification of hypergraph states, which form a family of multi-qubit states that are relevant from several perspectives.
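To make the objects concrete, the short sketch below builds the simplest hypergraph state (three qubits, a single hyperedge) rather than the purification protocol itself: apply CCZ to |+++>. Graph states use only two-qubit CZ edges; hypergraph states generalize this with multi-qubit phase gates.

```python
import numpy as np

# Three-qubit hypergraph state with hyperedge {1,2,3}: CCZ acting on |+++>.

plus = np.ones(2) / np.sqrt(2)
psi = np.kron(np.kron(plus, plus), plus)   # |+++>

ccz = np.eye(8)
ccz[7, 7] = -1                             # phase flip on |111>
hypergraph_state = ccz @ psi

# Amplitudes are uniform up to a sign that is -1 only on |111>.
for idx, amp in enumerate(hypergraph_state):
    print(f"|{idx:03b}>: {amp:+.4f}")
```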
arXiv Detail & Related papers (2023-01-26T19:00:01Z) - Optimal two-qubit gates in recurrence protocols of entanglement purification [0.0]
The proposed method is based on a numerical search in the whole set of SU(4) matrices with the aid of a quasi-Newton algorithm.
We show for certain families of states that optimal protocols are not necessarily achieved by bilaterally applied controlled-NOT gates.
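A toy numerical sketch in the same spirit: fix the recurrence structure (two Werner pairs, Alice applies a two-qubit unitary U, Bob applies U*, both measure the second pair and keep the first on coincident outcomes) and search over SU(4) with a quasi-Newton optimizer (BFGS). The parameterization, the U/U* choice, and the fidelity-only objective are illustrative assumptions, not the paper's actual setup or results.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # |Phi+> on (A, B)

def werner(F):
    """Werner state with fidelity F with respect to |Phi+>."""
    proj = np.outer(phi_plus, phi_plus)
    return F * proj + (1.0 - F) * (np.eye(4) - proj) / 3.0

def purification_round(U, F):
    """Output fidelity and success probability of one recurrence round."""
    rho = np.kron(werner(F), werner(F))                   # qubit order A1,B1,A2,B2
    rho = (rho.reshape((2,) * 8)
              .transpose(0, 2, 1, 3, 4, 6, 5, 7)          # reorder to A1,A2,B1,B2
              .reshape(16, 16))
    big_U = np.kron(U, U.conj())                          # Alice applies U, Bob U*
    rho = big_U @ rho @ big_U.conj().T
    rho8 = rho.reshape((2,) * 8)                          # (a1,a2,b1,b2, primes)
    # measure A2 and B2 in the computational basis, keep coincident outcomes
    kept = np.einsum('imjmkmlm->ijkl', rho8).reshape(4, 4)
    p_success = np.real(np.trace(kept))
    fidelity = np.real(phi_plus @ kept @ phi_plus) / p_success
    return fidelity, p_success

def unitary_from_params(x):
    """Map 32 real parameters to a 4x4 unitary via a Hermitian generator."""
    M = x[:16].reshape(4, 4) + 1j * x[16:].reshape(4, 4)
    return expm(-1j * (M + M.conj().T))

F_in = 0.75

def objective(x):
    # fidelity-only objective; a practical cost would also reward p_success
    return -purification_round(unitary_from_params(x), F_in)[0]

rng = np.random.default_rng(1)
result = minimize(objective, rng.normal(scale=0.1, size=32), method='BFGS')

cnot = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)
print("bilateral CNOT :", purification_round(cnot, F_in))
print("BFGS optimum   :", purification_round(unitary_from_params(result.x), F_in))
```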
arXiv Detail & Related papers (2022-05-24T14:13:56Z) - Data post-processing for the one-way heterodyne protocol under composable finite-size security [62.997667081978825]
We study the performance of a practical continuous-variable (CV) quantum key distribution protocol.
We focus on the Gaussian-modulated coherent-state protocol with heterodyne detection in a high signal-to-noise ratio regime.
This allows us to study the performance for practical implementations of the protocol and optimize the parameters connected to the steps above.
arXiv Detail & Related papers (2022-05-20T12:37:09Z) - Counterdiabatic Optimised Local Driving [0.0]
Adiabatic protocols are employed across a variety of quantum technologies.
The problem of speeding up these processes has garnered a large amount of interest.
Two complementary approaches address this: optimal control manipulates control fields to steer the dynamics, while shortcuts to adiabaticity aim to retain the adiabatic condition upon speed-up.
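As background, here is a minimal sketch of plain counterdiabatic driving on a two-level Landau-Zener sweep, the textbook construction rather than the optimized local driving of the paper: for H(t) = (eps(t) sigma_z + Delta sigma_x)/2, adding the exact counterdiabatic term H_cd = (theta_dot/2) sigma_y with theta = arctan(Delta/eps) keeps the system in the instantaneous ground state even for fast sweeps. Sweep parameters below are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

Delta, eps0, T = 1.0, 10.0, 1.0                # short T: deliberately fast sweep

def eps(t):
    return eps0 * (1.0 - 2.0 * t / T)          # linear sweep +eps0 -> -eps0

def hamiltonian(t, counterdiabatic):
    H = 0.5 * (eps(t) * sz + Delta * sx)
    if counterdiabatic:
        eps_dot = -2.0 * eps0 / T
        theta_dot = -Delta * eps_dot / (eps(t) ** 2 + Delta ** 2)
        H = H + 0.5 * theta_dot * sy           # exact counterdiabatic correction
    return H

def final_ground_state_fidelity(counterdiabatic):
    _, v0 = np.linalg.eigh(hamiltonian(0.0, False))
    psi0 = v0[:, 0].astype(complex)            # start in the ground state of H(0)
    sol = solve_ivp(lambda t, psi: -1j * hamiltonian(t, counterdiabatic) @ psi,
                    (0.0, T), psi0, rtol=1e-9, atol=1e-9)
    psiT = sol.y[:, -1]
    _, vT = np.linalg.eigh(hamiltonian(T, False))
    return abs(np.vdot(vT[:, 0], psiT)) ** 2   # overlap with final ground state

print(f"bare sweep         : fidelity {final_ground_state_fidelity(False):.4f}")
print(f"with CD correction : fidelity {final_ground_state_fidelity(True):.4f}")
```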
arXiv Detail & Related papers (2022-03-03T19:00:00Z) - Reinforcement learning-enhanced protocols for coherent population-transfer in three-level quantum systems [50.591267188664666]
We deploy a combination of reinforcement learning-based approaches and more traditional optimization techniques to identify optimal protocols for population transfer.
Our approach is able to explore the space of possible control protocols to reveal the existence of efficient protocols.
The new protocols that we identify are robust against both energy losses and dephasing.
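For context, below is a sketch of the conventional STIRAP baseline for such three-level transfer: a counter-intuitive (Stokes-before-pump) pulse sequence in a resonant Lambda system. The pulse shapes and parameters are illustrative assumptions; the RL-discovered protocols themselves are not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp

Omega0, sigma, delay, T = 20.0, 2.0, 2.0, 16.0

def pump(t):    # couples |1> <-> |2>, peaks late
    return Omega0 * np.exp(-((t - T / 2 - delay) / sigma) ** 2)

def stokes(t):  # couples |2> <-> |3>, peaks early
    return Omega0 * np.exp(-((t - T / 2 + delay) / sigma) ** 2)

def hamiltonian(t):
    # rotating-wave, two-photon-resonant Lambda system
    return 0.5 * np.array([[0,        pump(t),   0        ],
                           [pump(t),  0,         stokes(t)],
                           [0,        stokes(t), 0        ]], dtype=complex)

psi0 = np.array([1, 0, 0], dtype=complex)      # all population starts in |1>
sol = solve_ivp(lambda t, psi: -1j * hamiltonian(t) @ psi,
                (0.0, T), psi0, rtol=1e-9, atol=1e-9)
populations = np.abs(sol.y[:, -1]) ** 2
print("final populations |1>,|2>,|3>:", populations.round(4))
```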
arXiv Detail & Related papers (2021-09-02T14:17:30Z) - Entanglement-assisted entanglement purification [62.997667081978825]
We present a new class of entanglement-assisted entanglement purification protocols that can generate high-fidelity entanglement from noisy, finite-size ensembles.
Our protocols can deal with arbitrary errors, but are best suited for few errors, and work particularly well for decay noise.
arXiv Detail & Related papers (2020-11-13T19:00:05Z) - Stable Policy Optimization via Off-Policy Divergence Regularization [50.98542111236381]
Trust Region Policy Optimization (TRPO) and Proximal Policy Optimization (PPO) are among the most successful policy gradient approaches in deep reinforcement learning (RL).
We propose a new algorithm which stabilizes the policy improvement through a proximity term that constrains the discounted state-action visitation distribution induced by consecutive policies to be close to one another.
Our proposed method can have a beneficial effect on stability and improve final performance in benchmark high-dimensional control tasks.
arXiv Detail & Related papers (2020-03-09T13:05:47Z)