Bregman-divergence-based Arimoto-Blahut algorithm
- URL: http://arxiv.org/abs/2408.05454v1
- Date: Sat, 10 Aug 2024 06:16:24 GMT
- Title: Bregman-divergence-based Arimoto-Blahut algorithm
- Authors: Masahito Hayashi
- Abstract summary: We generalize the Arimoto-Blahut algorithm to a general function defined over a Bregman-divergence system.
We propose a convex-optimization-free algorithm that can be applied to classical and quantum rate-distortion theory.
- Score: 53.64687146666141
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We generalize the generalized Arimoto-Blahut algorithm to a general function defined over a Bregman-divergence system. In existing methods, when linear constraints are imposed, each iteration needs to solve a convex minimization problem. Exploiting our algorithm, we propose a convex-optimization-free alternative that can be applied to classical and quantum rate-distortion theory. We numerically apply our method to the derivation of the optimal conditional distribution in rate-distortion theory.
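The paper builds on the classical Arimoto-Blahut iteration. As background, here is a minimal NumPy sketch of the classical algorithm for discrete channel capacity; this is the textbook baseline, not the paper's Bregman-divergence generalization, and the function name and tolerances are illustrative:

```python
import numpy as np

def blahut_arimoto(W, tol=1e-9, max_iter=10_000):
    """Classical Blahut-Arimoto iteration for channel capacity.

    W: channel matrix with W[x, y] = p(y|x); each row sums to 1.
    Returns (capacity in nats, optimal input distribution p).
    """
    n_in = W.shape[0]
    p = np.full(n_in, 1.0 / n_in)  # start from the uniform input distribution

    def divergences(p):
        # D(W(.|x) || q) for each input x, where q is the induced output law;
        # zero channel entries are skipped so 0*log(0) contributes nothing.
        q = p @ W
        with np.errstate(divide="ignore", invalid="ignore"):
            log_ratio = np.where(W > 0, np.log(W / q), 0.0)
        return np.sum(W * log_ratio, axis=1)

    for _ in range(max_iter):
        d = divergences(p)
        new_p = p * np.exp(d)      # multiplicative (exponentiated) update
        new_p /= new_p.sum()       # renormalize to a probability vector
        if np.max(np.abs(new_p - p)) < tol:
            p = new_p
            break
        p = new_p
    capacity = float(p @ divergences(p))
    return capacity, p
```

For a binary symmetric channel with crossover 0.1, the iteration converges to the uniform input distribution and capacity log 2 − h(0.1) in nats.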
Related papers
- Generalization of Hamiltonian algorithms [6.835035668445878]
The paper proves generalization results for a class of learning algorithms.
The method applies whenever the algorithm generates an absolutely continuous distribution relative to some a priori measure.
Applications are bounds for the Gibbs algorithm and randomizations of stable deterministic algorithms as well as PAC-Bayesian bounds with data-dependent priors.
arXiv Detail & Related papers (2024-05-23T11:56:05Z) - Bayesian Design Principles for Frequentist Sequential Learning [11.421942894219901]
We develop a theory to optimize the frequentist regret for sequential learning problems.
We propose a novel optimization approach to generate "algorithmic beliefs" at each round.
We present a novel algorithm for multi-armed bandits that achieves the "best-of-all-worlds" empirical performance.
arXiv Detail & Related papers (2023-10-01T22:17:37Z) - A Bregman Proximal Perspective on Classical and Quantum Blahut-Arimoto Algorithms [6.281229317487581]
The Blahut-Arimoto algorithm is a well-known method to compute classical channel capacities and rate-distortion functions.
Recent works have extended this algorithm to compute various quantum analogs.
We show how these Blahut-Arimoto algorithms are special instances of mirror descent.
arXiv Detail & Related papers (2023-06-07T15:02:10Z) - Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling the differentiable and non-differentiable parts separately, linearizing only the smooth parts.
arXiv Detail & Related papers (2023-02-24T18:41:48Z) - Classical and Quantum Iterative Optimization Algorithms Based on Matrix Legendre-Bregman Projections [1.5736899098702972]
We consider Legendre-Bregman projections defined on the Hermitian matrix space and design iterative optimization algorithms based on them.
We study both exact and approximate Bregman projection algorithms.
In particular, our approximate iterative algorithm gives rise to the non-commutative versions of the generalized iterative scaling (GIS) algorithm for maximum entropy inference.
arXiv Detail & Related papers (2022-09-28T15:59:08Z) - Bregman divergence based em algorithm and its application to classical and quantum rate distortion theory [61.12008553173672]
We address the minimization problem of the Bregman divergence between an exponential subfamily and a mixture subfamily in a Bregman divergence system.
We apply this algorithm to rate distortion and its variants including the quantum setting.
arXiv Detail & Related papers (2022-01-07T13:33:28Z) - Amortized Implicit Differentiation for Stochastic Bilevel Optimization [53.12363770169761]
We study a class of algorithms for solving bilevel optimization problems in both deterministic and stochastic settings.
We exploit a warm-start strategy to amortize the estimation of the exact gradient.
By using this framework, our analysis shows these algorithms to match the computational complexity of methods that have access to an unbiased estimate of the gradient.
arXiv Detail & Related papers (2021-11-29T15:10:09Z) - An Asymptotically Optimal Primal-Dual Incremental Algorithm for Contextual Linear Bandits [129.1029690825929]
We introduce a novel algorithm improving over the state-of-the-art along multiple dimensions.
We establish minimax optimality for any learning horizon in the special case of non-contextual linear bandits.
arXiv Detail & Related papers (2020-10-23T09:12:47Z) - Optimal Randomized First-Order Methods for Least-Squares Problems [56.05635751529922]
This class of algorithms encompasses several randomized methods among the fastest solvers for least-squares problems.
We focus on two classical embeddings, namely, Gaussian projections and subsampled Hadamard transforms.
Our resulting algorithm yields the best complexity known for solving least-squares problems with no condition number dependence.
arXiv Detail & Related papers (2020-02-21T17:45:32Z)
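The last entry concerns randomized embeddings for least-squares problems. As a minimal illustration of the sketch-and-solve idea with a Gaussian projection (the function name, sketch size, and parameters below are illustrative, not taken from that paper):

```python
import numpy as np

def sketched_lstsq(A, b, sketch_size, seed=None):
    """Sketch-and-solve least squares with a Gaussian random projection.

    Approximates argmin_x ||Ax - b||_2 by solving the smaller problem
    min_x ||S A x - S b||_2, where S is a sketch_size x n Gaussian matrix
    with sketch_size much smaller than n.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    # Gaussian embedding, scaled so E[S^T S] = I
    S = rng.standard_normal((sketch_size, n)) / np.sqrt(sketch_size)
    x, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
    return x
```

For a consistent system (b in the range of A), the sketched solution recovers the exact minimizer whenever S A retains full column rank; for noisy b it gives a fast approximation whose accuracy grows with the sketch size.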
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.