BALPA: A Balanced Primal-Dual Algorithm for Nonsmooth Optimization with
Application to Distributed Optimization
- URL: http://arxiv.org/abs/2212.02835v1
- Date: Tue, 6 Dec 2022 09:18:31 GMT
- Authors: Luyao Guo, Jinde Cao, Xinli Shi, Shaofu Yang
- Abstract summary: We propose a novel primal-dual proximal splitting algorithm (PD-PSA) for the composite optimization problem with equality constraints.
In BALPA, the dual update is designed as a proximal point for a time-varying quadratic function, which balances the implementation of the primal and dual updates.
We propose a stochastic version of BALPA (S-BALPA) and apply the developed BALPA to devise a new distributed optimization algorithm.
- Score: 39.67743321086165
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a novel primal-dual proximal splitting algorithm
(PD-PSA), named BALPA, for the composite optimization problem with equality
constraints, where the loss function consists of a smooth term and a nonsmooth
term composed with a linear mapping. In BALPA, the dual update is designed as a
proximal point for a time-varying quadratic function, which balances the
implementation of the primal and dual updates and retains the proximity-induced
feature of classic PD-PSAs. In addition, by this balance, BALPA eliminates the
inefficiency of classic PD-PSAs for composite optimization problems in which
the Euclidean norm of the linear mapping or the equality constraint mapping is
large. Therefore, BALPA not only inherits the advantages of simple structure
and easy implementation of classic PD-PSAs but also ensures a fast convergence
when these norms are large. Moreover, we propose a stochastic version of BALPA
(S-BALPA) and apply the developed BALPA to distributed optimization to devise a
new distributed optimization algorithm. Furthermore, a comprehensive
convergence analysis for BALPA and S-BALPA is conducted, respectively. Finally,
numerical experiments demonstrate the efficiency of the proposed algorithms.
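The abstract describes the classic PD-PSA setting: a smooth loss plus a nonsmooth term composed with a linear mapping, solved by alternating primal and dual proximal steps. The sketch below implements the generic primal-dual proximal splitting template (a Chambolle-Pock-style iteration) on a toy instance, not BALPA's balanced dual update; the step sizes `tau` and `sigma` and the test problem are illustrative assumptions. Note the step-size condition `tau * sigma * ||A||^2 <= 1`, which degrades when the norm of the linear mapping is large — exactly the inefficiency the paper says BALPA is designed to eliminate.

```python
import numpy as np

def pd_psa(A, b, lam, iters=2000):
    """Generic primal-dual proximal splitting for
       min_x 0.5*||x - b||^2 + lam*||A x||_1
    (smooth term + nonsmooth term composed with a linear map).
    Illustrative classic scheme, NOT BALPA's balanced dual update."""
    m, n = A.shape
    L = np.linalg.norm(A, 2)       # operator norm ||A||
    tau = sigma = 1.0 / L          # so that tau * sigma * ||A||^2 = 1
    x = np.zeros(n)
    y = np.zeros(m)
    x_bar = x.copy()
    for _ in range(iters):
        # dual ascent + prox of g* (projection onto the box [-lam, lam]^m,
        # since g(z) = lam*||z||_1)
        y = np.clip(y + sigma * (A @ x_bar), -lam, lam)
        # primal descent + prox of the smooth term f(x) = 0.5*||x - b||^2
        x_new = (x - tau * (A.T @ y) + tau * b) / (1.0 + tau)
        # extrapolation step used by classic PD-PSAs
        x_bar = 2 * x_new - x
        x = x_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
b = rng.standard_normal(3)
x = pd_psa(A, b, lam=0.1)
```

Because `tau` and `sigma` scale inversely with `||A||`, a large operator norm forces tiny steps; BALPA's time-varying quadratic in the dual update is the paper's mechanism for avoiding that slowdown.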
Related papers
- Provably Mitigating Overoptimization in RLHF: Your SFT Loss is Implicitly an Adversarial Regularizer [52.09480867526656]
We identify the source of misalignment as a form of distributional shift and uncertainty in learning human preferences.
To mitigate overoptimization, we first propose a theoretical algorithm that chooses the best policy for an adversarially chosen reward model.
Using the equivalence between reward models and the corresponding optimal policy, the algorithm features a simple objective that combines a preference optimization loss and a supervised learning loss.
arXiv Detail & Related papers (2024-05-26T05:38:50Z)
- From Inverse Optimization to Feasibility to ERM [11.731853838892487]
We study the contextual inverse setting that utilizes additional contextual information to better predict parameters.
We experimentally validate our approach on synthetic and real-world problems and demonstrate improved performance compared to existing methods.
arXiv Detail & Related papers (2024-02-27T21:06:42Z)
- On Linear Convergence of PI Consensus Algorithm under the Restricted Secant Inequality [5.35599092568615]
This paper considers solving distributed optimization problems in peer-to-peer multi-agent networks.
By using the proportional-integral (PI) control strategy, various algorithms with fixed stepsize have been developed.
arXiv Detail & Related papers (2023-09-30T15:54:52Z)
- Asymptotically Unbiased Instance-wise Regularized Partial AUC Optimization: Theory and Algorithm [101.44676036551537]
One-way Partial AUC (OPAUC) and Two-way Partial AUC (TPAUC) measure the average performance of a binary classifier.
Most of the existing methods could only optimize PAUC approximately, leading to inevitable biases that are not controllable.
We present a simpler reformulation of the PAUC problem via distributionally robust optimization.
arXiv Detail & Related papers (2022-10-08T08:26:22Z)
- Proximal Point Imitation Learning [48.50107891696562]
We develop new algorithms with rigorous efficiency guarantees for infinite horizon imitation learning.
We leverage classical tools from optimization, in particular, the proximal-point method (PPM) and dual smoothing.
We achieve convincing empirical performance for both linear and neural network function approximation.
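The proximal-point method (PPM) referenced above iterates `x_{k+1} = argmin_x f(x) + ||x - x_k||^2 / (2*lam)`. The minimal sketch below applies it to the toy objective `f(x) = |x|`, whose proximal step has the closed-form soft-thresholding solution; this is purely illustrative of the PPM step, not the paper's imitation-learning application.

```python
import numpy as np

def prox_abs(v, lam):
    # Closed-form prox of f(x) = |x|: soft-thresholding,
    # argmin_x |x| + (x - v)^2 / (2*lam) = sign(v) * max(|v| - lam, 0).
    return np.sign(v) * max(abs(v) - lam, 0.0)

# PPM iteration: each step solves a strongly convex subproblem exactly.
x = 3.0
for _ in range(5):
    x = prox_abs(x, lam=1.0)
print(x)  # → 0.0 (iterates 3 -> 2 -> 1 -> 0 reach the minimizer of |x|)
```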
arXiv Detail & Related papers (2022-09-22T12:40:21Z)
- Faster Algorithm and Sharper Analysis for Constrained Markov Decision Process [56.55075925645864]
The problem of the constrained Markov decision process (CMDP) is investigated, where an agent aims to maximize the expected accumulated discounted reward subject to multiple constraints.
A new utilities-dual convex approach is proposed with novel integration of three ingredients: regularized policy, dual regularizer, and Nesterov's gradient descent dual.
This is the first demonstration that nonconcave CMDP problems can attain the complexity lower bound of $\mathcal{O}(1/\epsilon)$ for optimization subject to convex constraints.
arXiv Detail & Related papers (2021-10-20T02:57:21Z)
- Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm is proved to achieve the best-available convergence rate for non-PL objectives while simultaneously outperforming existing algorithms for PL objectives.
arXiv Detail & Related papers (2020-02-13T05:42:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.