Random Stinespring superchannel: converting channel queries into dilation isometry queries
- URL: http://arxiv.org/abs/2512.20599v1
- Date: Tue, 23 Dec 2025 18:46:07 GMT
- Title: Random Stinespring superchannel: converting channel queries into dilation isometry queries
- Authors: Filippo Girardi, Francesco Anna Mele, Haimeng Zhao, Marco Fanizza, Ludovico Lami,
- Abstract summary: We introduce a channel-level analogue, which we call the random Stinespring superchannel. We show that the optimal query complexity of learning a quantum channel with input dimension $d_A$, output dimension $d_B$, and Choi rank $r$ is $\Theta(d_A d_B r)$.
- Score: 9.841060883971746
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The recently introduced random purification channel, which converts $n$ copies of an arbitrary mixed quantum state into $n$ copies of the same uniformly random purification, has emerged as a powerful tool in quantum information theory. Motivated by this development, we introduce a channel-level analogue, which we call the random Stinespring superchannel. This consists of a procedure to transform $n$ parallel queries of an arbitrary quantum channel into $n$ parallel queries of the same uniformly random Stinespring isometry, via universal encoding and decoding operations that are efficiently implementable. When the channel is promised to have Choi rank at most $r$, the procedure can be tailored to yield a Stinespring environment of dimension $r$. As a consequence, quantum channel learning reduces to isometry learning, yielding a simple channel learning algorithm, based on existing isometry learning protocols, that matches the performance of the two recently proposed channel tomography algorithms. Complementarily, whereas the optimality of these algorithms had previously been established only up to a logarithmic factor in the dimension, we close this gap by removing this logarithmic factor from the lower bound. Taken together, our results fully establish the optimality of these recently introduced channel learning algorithms, showing that the optimal query complexity of learning a quantum channel with input dimension $d_A$, output dimension $d_B$, and Choi rank $r$ is $\Theta(d_A d_B r)$.
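For concreteness, the Stinespring picture the abstract relies on can be sketched numerically: a channel with Kraus operators $\{K_i\}_{i=1}^r$ has a Stinespring isometry $V = \sum_i K_i \otimes |i\rangle_E$ with environment dimension equal to the Choi rank $r$, and tracing out the environment recovers the channel. The following is a minimal NumPy illustration (not from the paper; the amplitude-damping example and function name are ours, chosen only to make the dilation concrete):

```python
import numpy as np

def stinespring_isometry(kraus):
    """Build V = sum_i K_i (x) |i>_E from Kraus operators {K_i};
    the environment dimension equals the number of Kraus operators r."""
    r = len(kraus)
    d_B, d_A = kraus[0].shape
    V = np.zeros((d_B * r, d_A), dtype=complex)
    for i, K in enumerate(kraus):
        e_i = np.zeros((r, 1))
        e_i[i, 0] = 1.0
        V += np.kron(K, e_i)  # maps |psi> to (K_i |psi>) (x) |i>_E
    return V

# Amplitude-damping channel (d_A = d_B = 2, Choi rank r = 2), gamma = 0.3
g = 0.3
K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1 - g)]])
K1 = np.array([[0.0, np.sqrt(g)], [0.0, 0.0]])
V = stinespring_isometry([K0, K1])

# V is an isometry: V^dag V = I_A
assert np.allclose(V.conj().T @ V, np.eye(2))

# Tracing out the environment recovers the channel: Phi(rho) = Tr_E[V rho V^dag]
rho = np.array([[0.5, 0.5], [0.5, 0.5]])  # the state |+><+|
phi = np.einsum('iaja->ij', (V @ rho @ V.conj().T).reshape(2, 2, 2, 2))
assert np.allclose(phi, K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T)
print("Stinespring dilation verified")
```

In this language, the paper's superchannel converts queries to the channel $\Phi$ into queries to a single uniformly random isometry of this form, so learning $V$ (an object of size roughly $d_A d_B r$) learns $\Phi$.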
Related papers
- Random dilation superchannel [0.0]
We present a quantum circuit that implements the random dilation superchannel. We show efficient storage and retrieval of an unknown quantum channel.
arXiv Detail & Related papers (2025-12-24T16:09:38Z) - Optimal Distributed Similarity Estimation for Unitary Channels [22.133088494335713]
We study distributed similarity estimation for unitary channels (DSEU). For $n$-qubit unitary channels, the query complexity of DSEU is $\Theta(\sqrt{d})$, where $d=2^n$, for both incoherent and coherent accesses. Our results provide practical and theoretically optimal tools for quantum device benchmarking and for distributed quantum learning.
arXiv Detail & Related papers (2025-12-11T09:40:31Z) - Optimal learning of quantum channels in diamond distance [0.0]
We show that a quantum channel acting on a $d$-dimensional system can be estimated to accuracy $\varepsilon$ in diamond distance. We obtain, to the best of our knowledge, the first essentially optimal strategies for operator-norm learning of binary POVMs and isometries.
arXiv Detail & Related papers (2025-12-11T02:04:03Z) - On the query complexity of unitary channel certification [0.0]
Certifying the correct functioning of a unitary channel is a critical step toward reliable quantum information processing. We show that incoherent algorithms (those without quantum memory) require $\Omega(d/\varepsilon^2)$ queries, matching the known upper bound.
arXiv Detail & Related papers (2025-07-23T06:51:33Z) - Quantum Advantage in Storage and Retrieval of Isometry Channels [0.8192907805418581]
We analyze the performance of classical and quantum strategies for the storage and retrieval of isometry channels. We propose a more efficient quantum strategy based on port-based teleportation, which stores the isometry channel in a program state.
arXiv Detail & Related papers (2025-07-14T20:18:12Z) - Deterministic identification over channels with finite output: a dimensional perspective on superlinear rates [49.126395046088014]
We consider the problem in its generality for memoryless channels with finite output, but arbitrary input alphabets. Our main findings are that the maximum length of messages thus identifiable scales superlinearly as $R\,n\log n$ with the block length $n$. We show that it is sufficient to ensure pairwise reliable distinguishability of the output distributions to construct a DI code.
arXiv Detail & Related papers (2024-02-14T11:59:30Z) - Deterministic Nonsmooth Nonconvex Optimization [82.39694252205011]
We show that randomization is necessary to obtain a dimension-free algorithm. Our algorithm yields the first deterministic dimension-free algorithm for optimizing ReLU networks.
arXiv Detail & Related papers (2023-02-16T13:57:19Z) - Fast Computation of Optimal Transport via Entropy-Regularized Extragradient Methods [75.34939761152587]
Efficient computation of the optimal transport distance between two distributions serves as an algorithm that empowers various applications.
This paper develops a scalable first-order optimization-based method that computes optimal transport to within $\varepsilon$ additive accuracy.
arXiv Detail & Related papers (2023-01-30T15:46:39Z) - A single $T$-gate makes distribution learning hard [56.045224655472865]
This work provides an extensive characterization of the learnability of the output distributions of local quantum circuits.
We show that for a wide variety of the most practically relevant learning algorithms -- including hybrid quantum-classical algorithms -- even the generative modelling problem associated with depth $d=\omega(\log(n))$ Clifford circuits is hard.
arXiv Detail & Related papers (2022-07-07T08:04:15Z) - Quantum Resources Required to Block-Encode a Matrix of Classical Data [56.508135743727934]
We provide circuit-level implementations and resource estimates for several methods of block-encoding a dense $N\times N$ matrix of classical data to precision $\epsilon$.
We examine resource tradeoffs between the different approaches and explore implementations of two separate models of quantum random access memory (QRAM).
Our results go beyond simple query complexity and provide a clear picture into the resource costs when large amounts of classical data are assumed to be accessible to quantum algorithms.
arXiv Detail & Related papers (2022-06-07T18:00:01Z) - Revisiting Random Channel Pruning for Neural Network Compression [159.99002793644163]
Channel (or 3D filter) pruning serves as an effective way to accelerate the inference of neural networks.
In this paper, we try to determine the channel configuration of the pruned models by random search.
We show that this simple strategy works quite well compared with other channel pruning methods.
arXiv Detail & Related papers (2022-05-11T17:59:04Z) - Twisted hybrid algorithms for combinatorial optimization [68.8204255655161]
Proposed hybrid algorithms encode a cost function into a problem Hamiltonian and optimize its energy by varying over a set of states with low circuit complexity.
We show that for levels $p=2,\ldots,6$, the level $p$ can be reduced by one while roughly maintaining the expected approximation ratio.
arXiv Detail & Related papers (2022-03-01T19:47:16Z) - Weighted Quantum Channel Compiling through Proximal Policy Optimization [0.0]
We propose a strategy to compile arbitrary quantum channels without using ancillary qubits.
We show that our proposed algorithm can conveniently and effectively reduce the use of expensive elementary gates.
arXiv Detail & Related papers (2021-11-03T18:00:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.