Optimal one-shot entanglement sharing
- URL: http://arxiv.org/abs/2301.01781v2
- Date: Fri, 6 Oct 2023 16:13:07 GMT
- Title: Optimal one-shot entanglement sharing
- Authors: Vikesh Siddhu and John Smolin
- Abstract summary: We discuss a practical setting where a quantum channel is used once with the aim of sharing high fidelity entanglement.
For any channel, we provide methods to easily find both this maximum fidelity and optimal inputs that achieve it.
This ensures a complete understanding in the sense that the maximum fidelity and optimal inputs found in our one-shot setting extend even when the channel is used multiple times.
- Score: 3.2634122554914002
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sharing entanglement across quantum interconnects is fundamental for quantum
information processing. We discuss a practical setting where this interconnect,
modeled by a quantum channel, is used once with the aim of sharing high
fidelity entanglement. For any channel, we provide methods to easily find both
this maximum fidelity and optimal inputs that achieve it. Unlike most metrics
for sharing entanglement, this maximum fidelity can be shown to be
multiplicative. This ensures a complete understanding in the sense that the
maximum fidelity and optimal inputs found in our one-shot setting extend even
when the channel is used multiple times, possibly with other channels. Optimal
inputs need not be fully entangled. We find the minimum entanglement in these
optimal inputs can even vary discontinuously with channel noise. Generally,
noise parameters are hard to identify and remain unknown for most channels.
However, for all qubit channels with qubit environments, we provide a rigorous
noise parametrization which we explain in terms of no-cloning. This noise
parametrization and a channel representation we call the standard Kraus
decomposition have pleasing properties that make them both useful more
generally.
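The paper's optimization methods for finding the maximum fidelity and optimal inputs are not reproduced here. As a minimal illustration of the quantity being optimized, the sketch below numerically evaluates the fidelity of entanglement shared through a qubit depolarizing channel (a standard example channel, not necessarily one treated in the paper) with a maximally entangled input, using a Kraus representation.

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def depolarizing_kraus(p):
    """Kraus operators of the qubit depolarizing channel
    N(rho) = (1 - p) rho + p I/2."""
    return [np.sqrt(1 - 3 * p / 4) * I2,
            np.sqrt(p / 4) * X,
            np.sqrt(p / 4) * Y,
            np.sqrt(p / 4) * Z]

def entanglement_fidelity(kraus, psi):
    """Fidelity <psi| (N ⊗ id)(|psi><psi|) |psi> for a two-qubit pure
    input psi, with the channel N acting on the first qubit."""
    rho = np.outer(psi, psi.conj())
    out = sum(np.kron(K, I2) @ rho @ np.kron(K, I2).conj().T for K in kraus)
    return float(np.real(psi.conj() @ out @ psi))

# Maximally entangled input |Phi+> = (|00> + |11>) / sqrt(2)
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

# For the depolarizing channel this fidelity is 1 - 3p/4
f = entanglement_fidelity(depolarizing_kraus(0.1), bell)  # 0.925
```

For the depolarizing channel the maximally entangled input happens to be optimal; the paper's point is that for a general channel the optimal input must be found, and need not be fully entangled.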
Related papers
- Unextendible entanglement of quantum channels [4.079147243688764]
We study the ability of quantum channels to perform quantum communication tasks.
A quantum channel can distill a highly entangled state between two parties.
We generalize the formalism of $k$-extendibility to bipartite superchannels.
arXiv Detail & Related papers (2024-07-22T18:00:17Z)
- Accurate and Honest Approximation of Correlated Qubit Noise [39.58317527488534]
We propose an efficient systematic construction of approximate noise channels, where their accuracy can be enhanced by incorporating noise components with higher qubit-qubit correlation degree.
We find that, for realistic noise strength typical for fixed-frequency superconducting qubits, correlated noise beyond two-qubit correlation can significantly affect the code simulation accuracy.
arXiv Detail & Related papers (2023-11-15T19:00:34Z)
- Multiparameter estimation with two qubit probes in noisy channels [0.618778092044887]
This work compares the performance of single and two qubit probes for estimating several phase rotations simultaneously.
We compute the quantum limits for this simultaneous estimation using collective and individual measurements.
In sufficiently noisy channels, we show that it is possible for single qubit probes to outperform maximally entangled two qubit probes.
arXiv Detail & Related papers (2023-07-26T03:20:48Z)
- Simultaneous superadditivity of the direct and complementary channel capacities [0.0]
We show that coherent and private information of a channel and its complement can be simultaneously superadditive for arbitrarily many channel uses.
For a varying number of channel uses, we show that these quantities can obey different interleaving sequences of inequalities.
arXiv Detail & Related papers (2023-01-12T16:58:12Z)
- Revisiting Random Channel Pruning for Neural Network Compression [159.99002793644163]
Channel (or 3D filter) pruning serves as an effective way to accelerate the inference of neural networks.
In this paper, we try to determine the channel configuration of the pruned models by random search.
We show that this simple strategy works quite well compared with other channel pruning methods.
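The random-search strategy described above can be sketched as follows. The layer widths, keep-ratio range, and parameter-count proxy are all hypothetical illustrations, not values from the paper; in practice each feasible configuration would then be briefly fine-tuned and ranked by validation accuracy.

```python
import random

# Hypothetical baseline: output channels per layer of a small conv net.
BASE_CHANNELS = [64, 128, 256, 512]

def random_configuration(base, keep_min=0.25, keep_max=1.0,
                         rng=random.Random(0)):
    """Sample one pruned configuration by drawing a random keep ratio
    per layer (one candidate of the random search)."""
    return [max(1, int(c * rng.uniform(keep_min, keep_max))) for c in base]

def params_proxy(channels):
    """Rough parameter-count proxy: in*out channels of 3x3 convs,
    assuming a 3-channel (RGB) input to the first layer."""
    ins = [3] + channels[:-1]
    return sum(i * o * 9 for i, o in zip(ins, channels))

# Random search: collect candidates that meet a 50% parameter budget.
budget = 0.5 * params_proxy(BASE_CHANNELS)
candidates = [random_configuration(BASE_CHANNELS, rng=random.Random(s))
              for s in range(100)]
feasible = [c for c in candidates if params_proxy(c) <= budget]
```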
arXiv Detail & Related papers (2022-05-11T17:59:04Z)
- Group Fisher Pruning for Practical Network Compression [58.25776612812883]
We present a general channel pruning approach that can be applied to various complicated structures.
We derive a unified metric based on Fisher information to evaluate the importance of a single channel and coupled channels.
Our method can be used to prune any structures including those with coupled channels.
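A minimal sketch of a Fisher-information-based channel importance score, assuming the empirical-Fisher form (squared first-order change in loss from removing a channel); the tensor shapes and the way gradients are obtained are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def fisher_channel_importance(activations, grads):
    """Empirical Fisher importance per channel.

    activations, grads: arrays of shape (N, C, H, W) holding a layer's
    activations and the loss gradients w.r.t. them over N samples.
    The score for channel c is E_n[(sum_{h,w} g * a)^2], the squared
    first-order loss change from zeroing that channel.
    """
    s = (activations * grads).sum(axis=(2, 3))  # per-sample, per-channel
    return (s ** 2).mean(axis=0)                # shape (C,)
```

Channels with the lowest scores would be pruned first; for coupled channels, the per-channel scores of a group would be summed before ranking, matching the idea of a unified metric for single and coupled channels.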
arXiv Detail & Related papers (2021-08-02T08:21:44Z)
- Statistical intrusion detection and eavesdropping in quantum channels with coupling: Multiple-preparation and single-preparation methods [2.2469167925905777]
Non-quantum communications include configurations with multiple-input multiple-output (MIMO) channels.
Some associated signal processing tasks consider these channels in a symmetric way, i.e. by assigning the same role to all inputs.
We here address asymmetric (blind and non-blind) ones, with emphasis on intrusion detection and additional comments about eavesdropping.
arXiv Detail & Related papers (2021-06-17T07:04:54Z)
- Channel-wise Knowledge Distillation for Dense Prediction [73.99057249472735]
We propose to align features channel-wise between the student and teacher networks.
We consistently achieve superior performance on three benchmarks with various network structures.
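One common way to align features channel-wise is to turn each channel's activation map into a spatial probability distribution and minimize a KL divergence between teacher and student. The sketch below illustrates this idea in NumPy; the temperature value and tensor shapes are arbitrary illustrations, not the paper's settings.

```python
import numpy as np

def channel_softmax(feat, tau=1.0):
    """Softmax over spatial locations, independently for each channel.
    feat: activation map of shape (C, H, W)."""
    c, h, w = feat.shape
    flat = feat.reshape(c, h * w) / tau
    flat = flat - flat.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(flat)
    return e / e.sum(axis=1, keepdims=True)

def channel_wise_kd_loss(teacher_feat, student_feat, tau=4.0, eps=1e-12):
    """KL(teacher || student) of the per-channel spatial distributions,
    averaged over channels and scaled by tau^2."""
    pt = channel_softmax(teacher_feat, tau)
    ps = channel_softmax(student_feat, tau)
    kl = (pt * (np.log(pt + eps) - np.log(ps + eps))).sum(axis=1)
    return float(tau ** 2 * kl.mean())
```

Normalizing per channel rather than per spatial location makes the student mimic where each channel of the teacher responds most strongly, which is what "channel-wise" alignment refers to here.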
arXiv Detail & Related papers (2020-11-26T12:00:38Z)
- Entanglement-assisted entanglement purification [62.997667081978825]
We present a new class of entanglement-assisted entanglement purification protocols that can generate high-fidelity entanglement from noisy, finite-size ensembles.
Our protocols can deal with arbitrary errors, but are best suited for few errors, and work particularly well for decay noise.
arXiv Detail & Related papers (2020-11-13T19:00:05Z)
- Channel-Level Variable Quantization Network for Deep Image Compression [50.3174629451739]
We propose a channel-level variable quantization network that dynamically allocates more convolutions to significant channels and withdraws them from negligible channels.
Our method achieves superior performance and can produce much better visual reconstructions.
arXiv Detail & Related papers (2020-07-15T07:20:39Z)
- Operation-Aware Soft Channel Pruning using Differentiable Masks [51.04085547997066]
We propose a data-driven algorithm, which compresses deep neural networks in a differentiable way by exploiting the characteristics of operations.
We perform extensive experiments and achieve outstanding performance in terms of the accuracy of output networks.
arXiv Detail & Related papers (2020-07-08T07:44:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.