A Constrained BA Algorithm for Rate-Distortion and Distortion-Rate
Functions
- URL: http://arxiv.org/abs/2305.02650v2
- Date: Thu, 18 Jan 2024 09:25:44 GMT
- Title: A Constrained BA Algorithm for Rate-Distortion and Distortion-Rate
Functions
- Authors: Lingyi Chen, Shitong Wu, Wenhao Ye, Huihui Wu, Wenyi Zhang, Hao Wu and
Bo Bai
- Abstract summary: A modification of the Blahut-Arimoto (BA) algorithm for computing rate-distortion functions.
The modified algorithm directly computes the RD function for a given target distortion.
- Score: 13.570794979535934
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Blahut-Arimoto (BA) algorithm has played a fundamental role in the
numerical computation of rate-distortion (RD) functions. This algorithm
possesses a desirable monotonic convergence property, obtained by alternately
minimizing its Lagrangian with a fixed multiplier. In this paper, we propose a
novel modification of the BA algorithm, wherein the multiplier is updated
through a one-dimensional root-finding step using a monotonic univariate
function, efficiently implemented by Newton's method in each iteration.
Consequently, the modified algorithm directly computes the RD function for a
given target distortion, without exploring the entire RD curve as in the
original BA algorithm. Moreover, this modification presents a versatile
framework, applicable to a wide range of problems, including the computation of
distortion-rate (DR) functions. Theoretical analysis shows that the outputs of
the modified algorithms still converge to the solutions of the RD and DR
functions with rate $O(1/n)$, where $n$ is the number of iterations.
Additionally, these algorithms provide $\varepsilon$-approximation solutions
with $O\left(\frac{MN\log N}{\varepsilon}(1+\log |\log \varepsilon|)\right)$
arithmetic operations, where $M,N$ are the sizes of source and reproduced
alphabets respectively. Numerical experiments demonstrate that the modified
algorithms exhibit significant acceleration compared with the original BA
algorithms and showcase commendable performance across classical source
distributions such as discretized Gaussian, Laplacian and uniform sources.
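To make the modification concrete, the following Python sketch alternates the standard BA update of the reproduction marginal with a per-iteration Newton root-finding step on the multiplier, as the abstract describes. The specific univariate root function used here (the average distortion at the current marginal minus the target) and all names and tolerances are illustrative assumptions, not the authors' exact construction:

```python
import numpy as np

def rd_at_target_distortion(p, dist, D_target, n_iters=500, newton_steps=8, beta0=1.0):
    """Illustrative BA-style iteration that targets a fixed distortion D_target.

    p        : source distribution, shape (M,)
    dist     : distortion matrix d(x, y), shape (M, N)
    D_target : target distortion at which R(D_target) is sought
    Returns (R, D, beta) at the final iterate (rate in nats).
    """
    q = np.full(dist.shape[1], 1.0 / dist.shape[1])  # reproduction marginal q(y)
    beta = beta0                                     # multiplier (RD-curve slope magnitude)

    def stats(beta, q):
        # Conditional Q(y|x) proportional to q(y) * exp(-beta * d(x, y)).
        w = q[None, :] * np.exp(-beta * dist)
        Q = w / w.sum(axis=1, keepdims=True)
        d_mean = (Q * dist).sum(axis=1)                    # E[d(x, Y) | x]
        d_var = (Q * dist ** 2).sum(axis=1) - d_mean ** 2  # Var[d(x, Y) | x]
        return Q, p @ d_mean, p @ d_var

    for _ in range(n_iters):
        # Newton's method on the monotone map beta -> D(beta) - D_target
        # (at fixed q); dD/dbeta = -E[Var d] <= 0, so the map is decreasing.
        for _ in range(newton_steps):
            _, D, v = stats(beta, q)
            if v < 1e-15:
                break
            beta = max(beta + (D - D_target) / v, 0.0)
        Q, _, _ = stats(beta, q)
        q = p @ Q                                    # standard BA marginal update

    Q, D, _ = stats(beta, q)
    R = float((p[:, None] * Q * np.log(Q / q[None, :])).sum())  # mutual information
    return R, D, beta
```

As a sanity check, for a uniform binary source with Hamming distortion the returned rate should approach the known value $R(D)=\log 2 - H_b(D)$ nats; for `D_target = 0.1` this is about 0.368 nats (0.531 bits).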
Related papers
- Stochastic Optimization for Non-convex Problem with Inexact Hessian
Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) methods and adaptive regularization using cubics (ARC) have proven to have some very appealing theoretical properties.
We show that TR and ARC methods can simultaneously accommodate inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z)
- Gradient-free optimization of highly smooth functions: improved analysis
and a new algorithm [87.22224691317766]
This work studies problems with zero-order noisy oracle information under the assumption that the objective function is highly smooth.
We consider two kinds of zero-order projected gradient descent algorithms.
arXiv Detail & Related papers (2023-06-03T17:05:13Z)
- Fast Computation of Optimal Transport via Entropy-Regularized Extragradient Methods [75.34939761152587]
Efficient computation of the optimal transport distance between two distributions serves as an algorithmic subroutine that empowers various applications.
This paper develops a scalable first-order optimization-based method that computes optimal transport to within $\varepsilon$ additive accuracy.
arXiv Detail & Related papers (2023-01-30T15:46:39Z)
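For context, the entropy-regularized problem targeted above is classically solved by Sinkhorn's matrix-scaling iterations; the sketch below is that classical baseline, not the paper's extragradient method, and the function name, regularization strength `reg`, and iteration count are illustrative assumptions:

```python
import numpy as np

def sinkhorn(mu, nu, C, reg=1e-2, n_iters=1000):
    """Classical Sinkhorn scaling for entropy-regularized optimal transport.

    mu, nu : source/target distributions, shapes (M,) and (N,)
    C      : cost matrix, shape (M, N)
    Returns the regularized transport plan and its transport cost.
    """
    K = np.exp(-C / reg)             # Gibbs kernel of the cost matrix
    u = np.ones_like(mu)
    for _ in range(n_iters):
        v = nu / (K.T @ u)           # scale to match column marginals
        u = mu / (K @ v)             # scale to match row marginals
    P = u[:, None] * K * v[None, :]  # regularized transport plan
    return P, float((P * C).sum())
```

Smaller `reg` approximates the unregularized optimal transport cost more closely, but slows convergence and risks numerical underflow in the Gibbs kernel.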
- Alternatives to a nonhomogeneous partial differential equation quantum
algorithm [52.77024349608834]
We propose a quantum algorithm for solving nonhomogeneous linear partial differential equations of the form $A\psi(\textbf{r})=f(\textbf{r})$.
These achievements enable easier experimental implementation of the quantum algorithm with present-day technology.
arXiv Detail & Related papers (2022-05-11T14:29:39Z)
- Alternating Mahalanobis Distance Minimization for Stable and Accurate CP
Decomposition [4.847980206213335]
We introduce a new formulation for deriving singular values and vectors of a tensor by considering the critical points of a function different from that used in previous work.
We show that a subsweep of this algorithm can achieve a superlinear convergence rate for exact CPD with known rank.
We then view the algorithm as optimizing a Mahalanobis distance with respect to each factor with ground metric dependent on the other factors.
arXiv Detail & Related papers (2022-04-14T19:56:36Z)
- A Fully Single Loop Algorithm for Bilevel Optimization without Hessian
Inverse [121.54116938140754]
We propose a new Hessian-inverse-free fully single loop algorithm for bilevel optimization problems.
We show that our algorithm converges at a rate of $O(\epsilon^{-2})$.
arXiv Detail & Related papers (2021-12-09T02:27:52Z)
- Faster First-Order Algorithms for Monotone Strongly DR-Submodular
Maximization [11.919431962501479]
Continuous DR-submodular functions satisfy the Diminishing Returns (DR) property, which implies that they are concave along non-negative directions.
We propose a new algorithm that matches the provably optimal $1-\frac{c}{e}$ approximation ratio after only $\lceil\frac{L}{\mu}\rceil$ iterations.
arXiv Detail & Related papers (2021-11-15T18:53:14Z)
- Provably Faster Algorithms for Bilevel Optimization [54.83583213812667]
Bilevel optimization has been widely applied in many important machine learning applications.
We propose two new algorithms for bilevel optimization.
We show that both algorithms achieve a complexity of $\mathcal{O}(\epsilon^{-1.5})$, which outperforms all existing algorithms by an order of magnitude.
arXiv Detail & Related papers (2021-06-08T21:05:30Z)
- A Domain-Shrinking based Bayesian Optimization Algorithm with
Order-Optimal Regret Performance [16.0251555430107]
This is the first GP-based algorithm with an order-optimal regret guarantee.
Compared with the prevailing GP-UCB family of algorithms, the proposed algorithm reduces computational complexity by a factor of $O(T^{2d-1})$.
arXiv Detail & Related papers (2020-10-27T02:15:15Z)
- Provably Convergent Working Set Algorithm for Non-Convex Regularized
Regression [0.0]
This paper proposes a working set algorithm for non-convex regularizers with convergence guarantees.
Our results demonstrate large gains over the full problem solver, for both block-coordinate and gradient solvers.
arXiv Detail & Related papers (2020-06-24T07:40:31Z)
- Efficient algorithms for multivariate shape-constrained convex
regression problems [9.281671380673306]
We prove that the least squares estimator is computable via solving a constrained convex quadratic programming (QP) problem with $(n+1)d$ variables and at least $n(n-1)$ linear inequality constraints.
For solving the generally very large-scale convex QP, we design two efficient algorithms: one is the symmetric Gauss-Seidel based alternating direction method of multipliers (sGS-ADMM), and the other is the proximal augmented Lagrangian method (pALM) with the subproblems solved by the semismooth Newton method.
arXiv Detail & Related papers (2020-02-26T11:18:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.