Decentralized Projection-free Online Upper-Linearizable Optimization with Applications to DR-Submodular Optimization
- URL: http://arxiv.org/abs/2501.18183v1
- Date: Thu, 30 Jan 2025 07:28:34 GMT
- Title: Decentralized Projection-free Online Upper-Linearizable Optimization with Applications to DR-Submodular Optimization
- Authors: Yiyang Lu, Mohammad Pedramfar, Vaneet Aggarwal
- Abstract summary: We introduce a novel framework for decentralized projection-free optimization.
Our approach combines decentralized optimization techniques with the flexibility of the upper-linearizable function framework.
- Score: 29.705337940879705
- Abstract: We introduce a novel framework for decentralized projection-free optimization, extending projection-free methods to a broader class of upper-linearizable functions. Our approach leverages decentralized optimization techniques with the flexibility of upper-linearizable function frameworks, effectively generalizing traditional DR-submodular function optimization. We obtain the regret of $O(T^{1-\theta/2})$ with communication complexity of $O(T^{\theta})$ and number of linear optimization oracle calls of $O(T^{2\theta})$ for decentralized upper-linearizable function optimization, for any $0\le \theta \le 1$. This approach allows for the first results for monotone up-concave optimization with general convex constraints and non-monotone up-concave optimization with general convex constraints. Further, the above results for first order feedback are extended to zeroth order, semi-bandit, and bandit feedback.
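As orientation for readers, the following is a minimal Python sketch of the decentralized projection-free primitive that frameworks like this build on: gossip averaging over a network plus a linear optimization oracle in place of projection. The names (`decentralized_fw_round`, `lmo`, `W`) are hypothetical, not from the paper.

```python
import numpy as np

def decentralized_fw_round(X, grads, W, lmo, eta):
    """One round of a decentralized Frank-Wolfe-style update (illustrative
    sketch under assumed interfaces, not the paper's algorithm).

    X     : (n_agents, d) array, one iterate per agent
    grads : (n_agents, d) array of local (upper-linearized) gradient feedback
    W     : (n_agents, n_agents) doubly stochastic gossip matrix
    lmo   : linear optimization oracle, lmo(g) = argmax_{v in K} <g, v>
    eta   : step size in [0, 1]
    """
    G = W @ grads                         # one communication round: mix neighbors' gradients
    X_avg = W @ X                         # consensus step on the iterates
    V = np.stack([lmo(g) for g in G])     # one linear-oracle call per agent, no projection
    return (1.0 - eta) * X_avg + eta * V  # convex combination stays inside K

# Example linear oracle: over the probability simplex, the maximizer is a vertex.
def simplex_lmo(g):
    v = np.zeros_like(g)
    v[np.argmax(g)] = 1.0
    return v
```

Roughly, the parameter $0\le\theta\le 1$ in the stated bounds budgets how many such communication rounds and oracle calls are spent per loss round, trading regret against communication.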
Related papers
- A Novel Unified Parametric Assumption for Nonconvex Optimization [53.943470475510196]
Nonconvex optimization is central to machine learning, but general nonconvexity admits only weak convergence guarantees that are too pessimistic compared to practice.
We introduce a novel unified parametric assumption for nonconvex optimization algorithms.
arXiv Detail & Related papers (2025-02-17T21:25:31Z)
- MARINA-P: Superior Performance in Non-smooth Federated Optimization with Adaptive Stepsizes [57.24311218570012]
We extend the theory of EF21-P (Anonymous 2024) and MARINA-P (arXiv:2402.06412) to the non-smooth convex setting.
We provide theoretical guarantees under constant, decreasing, and adaptive (Polyak-type) stepsizes.
arXiv Detail & Related papers (2024-12-22T16:18:34Z)
- Stochastic Zeroth-Order Optimization under Strongly Convexity and Lipschitz Hessian: Minimax Sample Complexity [59.75300530380427]
We consider the problem of optimizing second-order smooth and strongly convex functions where the algorithm has access only to noisy evaluations of the objective function at the points it queries.
We provide the first tight characterization of the minimax simple regret rate by developing matching upper and lower bounds.
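For intuition, minimax analyses in this regime are built around gradient estimates formed from a pair of noisy evaluations; below is the standard two-point estimator as a generic sketch, not this paper's specific construction.

```python
import numpy as np

def two_point_gradient(f, x, delta, rng):
    """Standard two-point zeroth-order gradient estimator (generic sketch).

    f     : noisy zeroth-order oracle returning F(x) plus noise
    x     : query point, shape (d,)
    delta : smoothing radius; smaller cuts bias but amplifies the noise
    rng   : numpy random Generator, e.g. np.random.default_rng()
    """
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)                                    # random unit direction
    fd = (f(x + delta * u) - f(x - delta * u)) / (2 * delta)  # directional finite difference
    return x.size * fd * u   # unbiased for the gradient of a smoothed surrogate of F
```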
arXiv Detail & Related papers (2024-06-28T02:56:22Z)
- From Linear to Linearizable Optimization: A Novel Framework with Applications to Stationary and Non-stationary DR-submodular Optimization [33.38582292895673]
This paper introduces the notion of upper-linearizable/quadratizable functions, extending concavity and DR-submodularity across various settings, including monotone and non-monotone cases.
A general meta-algorithm converts algorithms for linear/quadratic maximization into ones that optimize upper-linearizable/quadratizable functions.
arXiv Detail & Related papers (2024-04-27T06:19:30Z)
- Unified Projection-Free Algorithms for Adversarial DR-Submodular Optimization [28.598226670015315]
This paper introduces unified projection-free Frank-Wolfe type algorithms for adversarial DR-submodular optimization.
For every problem considered in the non-monotone setting, the proposed algorithms are either the first with proven sub-linear $\alpha$-regret bounds or improve on the state-of-the-art $\alpha$-regret bounds.
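A representative Frank-Wolfe-type primitive for the offline monotone case is sketched below for orientation; the paper's online algorithms are more involved, and the names here (`continuous_greedy`, `lmo`) are hypothetical.

```python
import numpy as np

def continuous_greedy(grad, lmo, d, n_steps=100):
    """Frank-Wolfe / continuous-greedy sketch for monotone DR-submodular
    maximization over a convex set K containing the origin (illustrative).

    grad : callable returning (an estimate of) the gradient of F at x
    lmo  : linear optimization oracle, lmo(g) = argmax_{v in K} <g, v>
    """
    x = np.zeros(d)              # assumed feasible starting point
    eta = 1.0 / n_steps
    for _ in range(n_steps):
        v = lmo(grad(x))         # projection-free ascent direction
        x = x + eta * v          # total movement is an average of points in K
    return x                     # attains a (1 - 1/e)-approximation under standard assumptions
```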
arXiv Detail & Related papers (2024-03-15T07:05:44Z)
- An Accelerated Gradient Method for Convex Smooth Simple Bilevel Optimization [16.709026203727007]
We present a novel bilevel optimization method that locally approximates the solution set of the lower-level problem.
We measure the performance of our method in terms of suboptimality and infeasibility errors.
arXiv Detail & Related papers (2024-02-12T22:34:53Z)
- Analyzing and Enhancing the Backward-Pass Convergence of Unrolled Optimization [50.38518771642365]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
A central challenge in this setting is backpropagation through the solution of an optimization problem, which often lacks a closed form.
This paper provides theoretical insights into the backward pass of unrolled optimization, showing that it is equivalent to the solution of a linear system by a particular iterative method.
A system called Folded Optimization is proposed to construct more efficient backpropagation rules from unrolled solver implementations.
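The linear-system equivalence is transparent on a quadratic objective; the sketch below (hypothetical code, not the paper's implementation) accumulates the Jacobian of k unrolled gradient steps and shows it is exactly a truncated Neumann-series solve of the fixed-point system.

```python
import numpy as np

def unrolled_jacobian(Q, eta, k):
    """Jacobian dx_k/dc of k unrolled gradient steps on
    f(x) = 0.5 * x^T Q x + c^T x, where each step is x <- x - eta * (Q x + c).

    The recursion J <- A @ J - eta * I with A = I - eta * Q unrolls to the
    truncated Neumann series -eta * (I + A + ... + A^{k-1}), i.e. an
    iterative solve of the linear system Q J = -I.
    """
    d = Q.shape[0]
    A = np.eye(d) - eta * Q
    J, P = np.zeros((d, d)), np.eye(d)
    for _ in range(k):
        J -= eta * P             # add the next Neumann-series term
        P = A @ P
    return J                     # -> -inv(Q) as k grows: the true solution's Jacobian
```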
arXiv Detail & Related papers (2023-12-28T23:15:18Z)
- Faster Algorithm and Sharper Analysis for Constrained Markov Decision Process [56.55075925645864]
The problem of constrained Markov decision process (CMDP) is investigated, where an agent aims to maximize the expected accumulated discounted reward subject to multiple constraints.
A new primal-dual approach is proposed with a novel integration of three ingredients: entropy-regularized policy optimizer, dual variable regularizer, and Nesterov's accelerated gradient descent dual optimizer.
This is the first demonstration that nonconcave CMDP problems can attain the complexity lower bound of $\mathcal{O}(1/\epsilon)$ for optimization subject to convex constraints.
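A generic Lagrangian primal-dual template for CMDPs is sketched below for context; the paper's method adds the entropy regularization, dual regularization, and acceleration mentioned above, and `policy_improve`, `value_r`, `value_c` are placeholder interfaces.

```python
def primal_dual_cmdp(value_r, value_c, policy_improve, pi0, budget, eta, n_iters):
    """Generic Lagrangian primal-dual loop for a CMDP (illustrative template).

    value_r, value_c : estimators of a policy's reward / constraint value
    policy_improve   : primal oracle improving value_r(pi) + lam * value_c(pi)
    budget           : constraint threshold, require value_c(pi) >= budget
    """
    pi, lam = pi0, 0.0
    for _ in range(n_iters):
        pi = policy_improve(pi, lam)                        # primal step on the Lagrangian
        lam = max(0.0, lam - eta * (value_c(pi) - budget))  # projected dual descent
    return pi, lam
```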
arXiv Detail & Related papers (2021-10-20T02:57:21Z)
- On Constraints in First-Order Optimization: A View from Non-Smooth Dynamical Systems [99.59934203759754]
We introduce a class of first-order methods for smooth constrained optimization.
A distinctive feature of our approach is that projections or optimizations over the entire feasible set are avoided.
The resulting algorithmic procedure is simple to implement even when constraints are nonlinear.
arXiv Detail & Related papers (2021-07-17T11:45:13Z)
- Block majorization-minimization with diminishing radius for constrained nonsmooth nonconvex optimization [8.386501595252]
Block majorization-minimization (BMM) is a simple iterative algorithm for constrained nonsmooth nonconvex optimization that sequentially minimizes majorizing surrogates of the objective over blocks of variables.
We analyze the convergence of BMM with respect to a novel first-order optimality measure.
We also demonstrate that the additional use of diminishing radius can improve the convergence rate of BMM in many instances.
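The diminishing-radius idea amounts to adding a shrinking trust-region constraint to each block surrogate minimization; the schematic below uses a placeholder `min_surrogate` for the per-block majorizer solve.

```python
import numpy as np

def bmm_diminishing_radius(min_surrogate, blocks, x0, n_iters):
    """Block majorization-minimization with a diminishing trust-region
    radius (schematic sketch under assumed interfaces).

    min_surrogate : callable (x, block, radius) minimizing a majorizing
                    surrogate over the block, within a step of size <= radius
    blocks        : list of index arrays partitioning the variables
    """
    x = x0.copy()
    for t in range(1, n_iters + 1):
        radius = 1.0 / np.sqrt(t)               # radius shrinks like t^{-1/2}
        for b in blocks:
            x[b] = min_surrogate(x, b, radius)  # trust-region-constrained block update
    return x
```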
arXiv Detail & Related papers (2020-12-07T07:53:09Z)
- Stochastic Coordinate Minimization with Progressive Precision for Stochastic Convex Optimization [16.0251555430107]
A framework based on iterative coordinate minimization (CM) is developed for convex optimization.
We establish the optimal precision control and the resulting order-optimal regret performance.
The proposed algorithm is amenable to online implementation and inherits the scalability and parallelizability properties of CM for large-scale optimization.
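The basic loop can be sketched as follows, with `solve_1d` a placeholder for an approximate one-dimensional stochastic minimizer whose precision is tightened over rounds.

```python
import numpy as np

def cm_progressive_precision(solve_1d, x0, n_rounds, rng):
    """Stochastic coordinate minimization with progressive precision
    (schematic sketch under assumed interfaces).

    solve_1d : callable (x, i, tol) returning an approximate minimizer of the
               objective along coordinate i, accurate to tolerance tol
    rng      : numpy random Generator, e.g. np.random.default_rng()
    """
    x = x0.copy()
    for t in range(1, n_rounds + 1):
        i = rng.integers(x.size)   # sample a coordinate uniformly
        tol = 1.0 / t              # progressive precision: tighter each round
        x[i] = solve_1d(x, i, tol)
    return x
```

Each coordinate subproblem is solved only as accurately as the current round warrants, which keeps early iterations cheap.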
arXiv Detail & Related papers (2020-03-11T18:42:40Z)