Bregman divergence based em algorithm and its application to classical
and quantum rate distortion theory
- URL: http://arxiv.org/abs/2201.02447v3
- Date: Thu, 5 May 2022 03:43:15 GMT
- Title: Bregman divergence based em algorithm and its application to classical
and quantum rate distortion theory
- Authors: Masahito Hayashi
- Abstract summary: We address the minimization problem of the Bregman divergence between an exponential subfamily and a mixture subfamily in a Bregman divergence system.
We apply this algorithm to rate distortion and its variants including the quantum setting.
- Score: 61.12008553173672
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We formulate the em algorithm in the framework of Bregman divergence,
a general problem setting of information geometry. That is, we address the
minimization of the Bregman divergence between an exponential subfamily
and a mixture subfamily in a Bregman divergence system. We then show
convergence and its speed under several conditions. We apply this algorithm to
rate distortion and its variants, including the quantum setting, and show the
usefulness of our general algorithm. Indeed, existing applications of the
Arimoto-Blahut algorithm to rate distortion theory optimize the weighted sum
of the mutual information and the cost function via a Lagrange multiplier.
In rate distortion theory, however, one needs to minimize the mutual
information under a constant constraint on the cost function. Our algorithm
solves this constrained minimization directly. In addition, we have
numerically checked the convergence speed of our algorithm in the classical
rate distortion problem.
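The Lagrange-multiplier approach that the abstract contrasts with can be illustrated by the classical Arimoto-Blahut iteration: for a fixed slope beta one alternates between updating the test channel and the output marginal, and a constant distortion constraint is then met indirectly by searching over beta. The following NumPy sketch shows that baseline (it is not the paper's em algorithm; the function names and the bisection-on-beta wrapper are illustrative assumptions):

```python
import numpy as np

def blahut_arimoto(p, d, beta, n_iter=500, tol=1e-10):
    """Arimoto-Blahut iteration for rate distortion at a fixed slope beta.

    p: source distribution over x, shape (nx,)
    d: distortion matrix d[x, xhat], shape (nx, ny)
    Returns (mutual information in nats, expected distortion).
    """
    ny = d.shape[1]
    q = np.full(ny, 1.0 / ny)            # output marginal, start uniform
    for _ in range(n_iter):
        # optimal test channel W(xhat | x) for the current output marginal
        w = q * np.exp(-beta * d)
        w /= w.sum(axis=1, keepdims=True)
        # output marginal induced by the current channel
        q_new = p @ w
        if np.max(np.abs(q_new - q)) < tol:
            q = q_new
            break
        q = q_new
    rate = np.sum(p[:, None] * w * np.log(w / q))   # mutual information (nats)
    dist = np.sum(p[:, None] * w * d)               # expected distortion
    return rate, dist

def rate_at_distortion(p, d, target_d, beta_hi=200.0, n_bisect=60):
    """Bisect on beta so the distortion constraint E[d] = target_d is met.

    This outer search is the indirect route the abstract criticizes: the
    constraint is only satisfied by tuning the multiplier, not enforced
    directly inside the iteration.
    """
    lo, hi = 0.0, beta_hi
    for _ in range(n_bisect):
        beta = 0.5 * (lo + hi)
        rate, dist = blahut_arimoto(p, d, beta)
        if dist > target_d:
            lo = beta        # larger slope penalizes distortion more
        else:
            hi = beta
    return rate, dist

# Binary uniform source with Hamming distortion, where R(D) = ln 2 - H_b(D)
# in nats is known in closed form, so the output can be sanity-checked.
p = np.array([0.5, 0.5])
d = np.array([[0.0, 1.0], [1.0, 0.0]])
rate, dist = rate_at_distortion(p, d, target_d=0.1)
```

For this symmetric example the bisection recovers the closed-form value R(0.1) = ln 2 - H_b(0.1) ≈ 0.368 nats, while an algorithm of the kind the paper proposes would reach the constrained optimum without the outer multiplier search.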
Related papers
- Alternating Minimization Schemes for Computing Rate-Distortion-Perception Functions with $f$-Divergence Perception Constraints [10.564071872770146]
We study the computation of the rate-distortion-perception function (RDPF) for discrete memoryless sources.
We characterize the optimal parametric solutions.
We provide sufficient conditions on the distortion and the perception constraints.
arXiv Detail & Related papers (2024-08-27T12:50:12Z)
- Bregman-divergence-based Arimoto-Blahut algorithm [53.64687146666141]
We generalize the Arimoto-Blahut algorithm to a general function defined over Bregman-divergence system.
We propose a convex-optimization-free algorithm that can be applied to classical and quantum rate-distortion theory.
arXiv Detail & Related papers (2024-08-10T06:16:24Z)
- A Bregman Proximal Perspective on Classical and Quantum Blahut-Arimoto Algorithms [6.281229317487581]
The Blahut-Arimoto algorithm is a well-known method to compute classical channel capacities and rate-distortion functions.
Recent works have extended this algorithm to compute various quantum analogs.
We show how these Blahut-Arimoto algorithms are special instances of mirror descent.
arXiv Detail & Related papers (2023-06-07T15:02:10Z)
- Optimal Algorithms for the Inhomogeneous Spiked Wigner Model [89.1371983413931]
We derive an approximate message-passing algorithm (AMP) for the inhomogeneous problem.
We identify in particular the existence of a statistical-to-computational gap where known algorithms require a signal-to-noise ratio bigger than the information-theoretic threshold to perform better than random.
arXiv Detail & Related papers (2023-02-13T19:57:17Z)
- First-Order Algorithms for Nonlinear Generalized Nash Equilibrium Problems [88.58409977434269]
We consider the problem of computing an equilibrium in a class of nonlinear generalized Nash equilibrium problems (NGNEPs).
Our contribution is to provide two simple first-order algorithmic frameworks based on the quadratic penalty method and the augmented Lagrangian method.
We provide nonasymptotic theoretical guarantees for these algorithms.
arXiv Detail & Related papers (2022-04-07T00:11:05Z)
- The Last-Iterate Convergence Rate of Optimistic Mirror Descent in Stochastic Variational Inequalities [29.0058976973771]
We show an intricate relation between the algorithm's rate of convergence and the local geometry induced by the method's underlying Bregman function.
We show that this exponent determines both the optimal step-size policy of the algorithm and the optimal rates attained.
arXiv Detail & Related papers (2021-07-05T09:54:47Z)
- A proximal-proximal majorization-minimization algorithm for nonconvex tuning-free robust regression problems [4.261680642170457]
We introduce a proximal-proximal majorization-minimization (PPMM) algorithm for nonconvex tuning-free robust regression problems.
Our proposed algorithm outperforms the existing state-of-the-art algorithms.
arXiv Detail & Related papers (2021-06-25T15:07:13Z)
- An Asymptotically Optimal Primal-Dual Incremental Algorithm for Contextual Linear Bandits [129.1029690825929]
We introduce a novel algorithm improving over the state-of-the-art along multiple dimensions.
We establish minimax optimality for any learning horizon in the special case of non-contextual linear bandits.
arXiv Detail & Related papers (2020-10-23T09:12:47Z)
- Convergence of adaptive algorithms for weakly convex constrained optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first and second order moment parameters, and possibly smooth optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.