A Mirror-Descent Algorithm for Computing the Petz-Rényi Capacity of Classical-Quantum Channels
- URL: http://arxiv.org/abs/2601.10558v1
- Date: Thu, 15 Jan 2026 16:22:52 GMT
- Title: A Mirror-Descent Algorithm for Computing the Petz-Rényi Capacity of Classical-Quantum Channels
- Authors: Yu-Hong Lai, Hao-Chung Cheng
- Abstract summary: We study the computation of the $α$-Rényi capacity of a classical-quantum (c-q) channel for $α\in(0,1)$. We propose an exponentiated-gradient (mirror descent) iteration that generalizes the Blahut-Arimoto algorithm.
- Score: 13.098901971644656
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the computation of the $α$-Rényi capacity of a classical-quantum (c-q) channel for $α\in(0,1)$. We propose an exponentiated-gradient (mirror descent) iteration that generalizes the Blahut-Arimoto algorithm. Our analysis establishes relative smoothness with respect to the entropy geometry, guaranteeing a global sublinear convergence of the objective values. Furthermore, under a natural tangent-space nondegeneracy condition (and a mild spectral lower bound in one regime), we prove local linear (geometric) convergence in Kullback-Leibler divergence on a truncated probability simplex, with an explicit contraction factor once the local curvature constants are bounded.
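For readers unfamiliar with the classical special case that the proposed iteration generalizes: the Blahut-Arimoto update for the ordinary ($α\to 1$) capacity of a classical channel is itself an exponentiated-gradient step on the probability simplex. The sketch below is a minimal NumPy illustration of that classical special case only, not the paper's Petz-Rényi c-q algorithm; the function name and interface are illustrative.

```python
import numpy as np

def blahut_arimoto(W, n_iters=500, tol=1e-12):
    # W[y, x] = P(y | x): each column of W is a conditional output distribution.
    n_out, n_in = W.shape
    p = np.full(n_in, 1.0 / n_in)  # start from the uniform input distribution

    def grad(p):
        # D(x) = KL( W(.|x) || Wp ): the gradient of mutual information in p,
        # up to an additive constant that cancels under normalization.
        q = W @ p
        with np.errstate(divide="ignore", invalid="ignore"):
            log_ratio = np.where(W > 0, np.log(W / q[:, None]), 0.0)
        return np.sum(W * log_ratio, axis=0)

    for _ in range(n_iters):
        D = grad(p)
        p_new = p * np.exp(D)   # exponentiated-gradient (multiplicative) step
        p_new /= p_new.sum()    # renormalize back onto the simplex
        if np.max(np.abs(p_new - p)) < tol:
            p = p_new
            break
        p = p_new
    return p, float(p @ grad(p))  # capacity estimate in nats at the final iterate
```

For a binary symmetric channel with crossover probability 0.1, this recovers the uniform optimal input and the textbook capacity $1 - h_2(0.1)$ bits.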
Related papers
- Graph-based Clustering Revisited: A Relaxation of Kernel $k$-Means Perspective [73.18641268511318]
We propose a graph-based clustering algorithm that relaxes only the orthonormal constraint to derive clustering results.
We transform the non-negative constraint into a class probability parameter.
arXiv Detail & Related papers (2025-09-23T09:14:39Z) - Euclidean Distance Matrix Completion via Asymmetric Projected Gradient Descent [25.846262685970164]
This paper proposes and analyzes a gradient-type algorithm based on Burer-Monteiro factorization.
It reconstructs the point-set configuration from partial Euclidean distance measurements.
arXiv Detail & Related papers (2025-04-28T07:13:23Z) - A Mean-Field Analysis of Neural Stochastic Gradient Descent-Ascent for Functional Minimax Optimization [90.87444114491116]
This paper studies minimax optimization problems defined over infinite-dimensional function classes of overparametrized two-layer neural networks.
We address (i) the convergence of the gradient descent-ascent algorithm and (ii) the representation learning of the neural networks.
Results show that the feature representation induced by the neural networks is allowed to deviate from the initial one by the magnitude of $O(\alpha^{-1})$, measured in terms of the Wasserstein distance.
arXiv Detail & Related papers (2024-04-18T16:46:08Z) - A Bregman Proximal Perspective on Classical and Quantum Blahut-Arimoto Algorithms [6.281229317487581]
The Blahut-Arimoto algorithm is a well-known method to compute classical channel capacities and rate-distortion functions.
Recent works have extended this algorithm to compute various quantum analogs.
We show how these Blahut-Arimoto algorithms are special instances of mirror descent.
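As a brief illustration of this correspondence in the classical case (a sketch, using standard notation rather than that of the cited paper): entropic mirror descent on the probability simplex updates

$$p_{t+1}(x) \;\propto\; p_t(x)\,\exp\!\big(\gamma_t\,[\nabla I(p_t)]_x\big),$$

and for the mutual-information objective $I(p) = \sum_x p(x)\, D\big(W(\cdot|x)\,\big\|\,(Wp)\big)$ one has $[\nabla I(p)]_x = D\big(W(\cdot|x)\,\big\|\,(Wp)\big) - 1$. With step size $\gamma_t = 1$, the constant $-1$ cancels under normalization and the iteration reduces exactly to the Blahut-Arimoto update $p_{t+1}(x) \propto p_t(x)\exp\!\big(D(W(\cdot|x)\,\|\,Wp_t)\big)$.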
arXiv Detail & Related papers (2023-06-07T15:02:10Z) - Convergence of Adam Under Relaxed Assumptions [72.24779199744954]
We show that Adam converges to $\epsilon$-stationary points with $O(\epsilon^{-4})$ gradient complexity under far more realistic conditions.
We also propose a variance-reduced version of Adam with an accelerated gradient complexity of $O(\epsilon^{-3})$.
arXiv Detail & Related papers (2023-04-27T06:27:37Z) - Kernel-based off-policy estimation without overlap: Instance optimality beyond semiparametric efficiency [53.90687548731265]
We study optimal procedures for estimating a linear functional based on observational data.
For any convex and symmetric function class $\mathcal{F}$, we derive a non-asymptotic local minimax bound on the mean-squared error.
arXiv Detail & Related papers (2023-01-16T02:57:37Z) - Nonconvex Stochastic Scaled-Gradient Descent and Generalized Eigenvector Problems [98.34292831923335]
Motivated by the problem of online correlation analysis, we propose the Stochastic Scaled-Gradient Descent (SSD) algorithm.
We bring these ideas together in an application to online correlation analysis, deriving for the first time an optimal one-time-scale algorithm with an explicit rate of local convergence to normality.
arXiv Detail & Related papers (2021-12-29T18:46:52Z) - Unadjusted Langevin algorithm for sampling a mixture of weakly smooth potentials [0.0]
We prove convergence guarantees under a Poincaré inequality or under non-strong convexity outside a ball.
We also provide convergence in the $L_\beta$-Wasserstein metric for the smoothed potential.
arXiv Detail & Related papers (2021-12-17T04:10:09Z) - Lipschitz Analysis of Generalized Phase Retrievable Matrix Frames [0.0]
We provide computable global stability bounds for the quasi-linear analysis map $\beta$.
We show that for the impure state case no such global stability bounds can be obtained for the non-linear analysis map $\alpha$.
Our computation of the global lower Lipschitz constant for the $\beta$ analysis map provides novel conditions for a frame to be generalized phase retrievable.
arXiv Detail & Related papers (2021-09-29T16:16:55Z) - Optimal policy evaluation using kernel-based temporal difference methods [78.83926562536791]
We use kernel Hilbert spaces for estimating the value function of an infinite-horizon discounted Markov reward process.
We derive a non-asymptotic upper bound on the error with explicit dependence on the eigenvalues of the associated kernel operator.
We prove minimax lower bounds over sub-classes of MRPs.
arXiv Detail & Related papers (2021-09-24T14:48:20Z) - Spectral clustering under degree heterogeneity: a case for the random walk Laplacian [83.79286663107845]
This paper shows that graph spectral embedding using the random walk Laplacian produces vector representations which are completely corrected for node degree.
In the special case of a degree-corrected block model, the embedding concentrates about K distinct points, representing communities.
arXiv Detail & Related papers (2021-05-03T16:36:27Z) - A Continuous-Time Mirror Descent Approach to Sparse Phase Retrieval [24.17778927729799]
We analyze continuous-time mirror descent applied to sparse phase retrieval.
This is the problem of recovering sparse signals from a set of magnitude-only measurements.
We provide a convergence analysis of the algorithm for this problem.
arXiv Detail & Related papers (2020-10-20T10:03:44Z) - Tight Nonparametric Convergence Rates for Stochastic Gradient Descent under the Noiseless Linear Model [0.0]
We analyze the convergence of single-pass, fixed step-size gradient descent on the least-square risk under this model.
As a special case, we analyze an online algorithm for estimating a real function on the unit interval from the noiseless observation of its value at randomly sampled points.
arXiv Detail & Related papers (2020-06-15T08:25:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.