Faster First-Order Algorithms for Monotone Strongly DR-Submodular
Maximization
- URL: http://arxiv.org/abs/2111.07990v1
- Date: Mon, 15 Nov 2021 18:53:14 GMT
- Title: Faster First-Order Algorithms for Monotone Strongly DR-Submodular
Maximization
- Authors: Omid Sadeghi, Maryam Fazel
- Abstract summary: Continuous DR-submodular functions satisfy the Diminishing Returns (DR) property, which implies that they are concave along non-negative directions.
We propose a new algorithm that matches the provably optimal $1-\frac{c}{e}$ approximation ratio after only $\lceil\frac{L}{\mu}\rceil$ iterations.
- Score: 11.919431962501479
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Continuous DR-submodular functions are a class of generally
non-convex/non-concave functions that satisfy the Diminishing Returns (DR)
property, which implies that they are concave along non-negative directions.
Existing work has studied monotone continuous DR-submodular maximization
subject to a convex constraint and provided efficient algorithms with
approximation guarantees. In many applications, such as computing the stability
number of a graph, the monotone DR-submodular objective function has the
additional property of being strongly concave along non-negative directions
(i.e., strongly DR-submodular). In this paper, we consider a subclass of
$L$-smooth monotone DR-submodular functions that are strongly DR-submodular and
have a bounded curvature, and we show how to exploit such additional structure
to obtain faster algorithms with stronger guarantees for the maximization
problem. We propose a new algorithm that matches the provably optimal
$1-\frac{c}{e}$ approximation ratio after only $\lceil\frac{L}{\mu}\rceil$
iterations, where $c\in[0,1]$ and $\mu\geq 0$ are the curvature and the strong
DR-submodularity parameter. Furthermore, we study the Projected Gradient Ascent
(PGA) method for this problem, and provide a refined analysis of the algorithm
with an improved $\frac{1}{1+c}$ approximation ratio (compared to $\frac{1}{2}$
in prior works) and a linear convergence rate. Experimental results illustrate
and validate the efficiency and effectiveness of our proposed algorithms.
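The abstract refers to the standard Projected Gradient Ascent (PGA) baseline whose analysis the paper refines. The snippet below is a minimal illustrative sketch (not the authors' implementation): it runs PGA on a monotone DR-submodular quadratic $f(x) = b^\top x - \frac{1}{2}x^\top A x$ over the box $[0,1]^n$, where an entrywise non-negative $A$ makes every second partial derivative non-positive. The quadratic instance, the box constraint, the step size, and all names are assumptions made only for illustration.
```python
# Minimal sketch (not the paper's implementation): Projected Gradient Ascent (PGA)
# for maximizing a monotone DR-submodular quadratic over the box [0, 1]^n.
import numpy as np

rng = np.random.default_rng(0)
n = 20

# Entrywise non-negative A makes every second partial derivative <= 0 (DR-submodular);
# choosing b >= A @ 1 keeps the gradient non-negative on [0, 1]^n (monotone).
A = rng.uniform(0.0, 1.0, size=(n, n))
A = 0.5 * (A + A.T)                      # symmetric, entrywise non-negative
b = A @ np.ones(n) + rng.uniform(0.1, 1.0, size=n)

def f(x):
    return b @ x - 0.5 * x @ A @ x

def grad_f(x):
    return b - A @ x

L = np.linalg.norm(A, 2)                 # smoothness constant of the quadratic

def projected_gradient_ascent(x0, num_iters, step):
    """Ascend along the gradient and project back onto the box [0, 1]^n."""
    x = x0.copy()
    for _ in range(num_iters):
        x = np.clip(x + step * grad_f(x), 0.0, 1.0)   # box projection = clipping
    return x

x_hat = projected_gradient_ascent(np.zeros(n), num_iters=200, step=1.0 / L)
print(f"f(x_hat) = {f(x_hat):.4f}")
```
The $1/L$ step size is a common default for $L$-smooth objectives and is an assumption here, not necessarily the schedule analyzed in the paper; the paper's contribution is the refined $\frac{1}{1+c}$ guarantee and linear rate for PGA, and a new algorithm reaching $1-\frac{c}{e}$ after $\lceil\frac{L}{\mu}\rceil$ iterations.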
Related papers
- Boosting Gradient Ascent for Continuous DR-submodular Maximization [18.040022136474114]
Projected Gradient Ascent (PGA) is the most commonly used optimization scheme in machine learning and operations research.
We present a boosting technique which can efficiently improve the approximation guarantee of the standard PGA to optimal with only small modifications to the objective function.
Our resulting boosted PGA variants outperform the standard PGA in several aspects, such as approximation ratio and efficiency.
arXiv Detail & Related papers (2024-01-16T12:49:10Z) - Adaptive, Doubly Optimal No-Regret Learning in Strongly Monotone and Exp-Concave Games with Gradient Feedback [75.29048190099523]
Online gradient descent (OGD) is well known to be doubly optimal under strong convexity or monotonicity assumptions.
In this paper, we design a fully adaptive OGD algorithm, \textsf{AdaOGD}, that does not require a priori knowledge of these parameters.
arXiv Detail & Related papers (2023-10-21T18:38:13Z) - Stochastic Optimization for Non-convex Problem with Inexact Hessian
Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) and adaptive regularization using cubics (ARC) have proven to have some very appealing theoretical properties.
We show that TR and ARC methods can simultaneously allow for inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z) - Refined Regret for Adversarial MDPs with Linear Function Approximation [50.00022394876222]
We consider learning in an adversarial Markov Decision Process (MDP) where the loss functions can change arbitrarily over $K$ episodes.
This paper provides two algorithms that improve the regret to $\tilde{\mathcal{O}}(K^{2/3})$ in the same setting.
arXiv Detail & Related papers (2023-01-30T14:37:21Z) - Communication-Efficient Decentralized Online Continuous DR-Submodular
Maximization [11.889570525184801]
We present two decentralized online algorithms for the monotone continuous DR-submodular maximization problem.
The first one, One-shot Decentralized Meta-Frank-Wolfe (Mono-DMFW), achieves a $(1-1/e)$-regret bound of $O(T^{4/5})$.
Next, inspired by the non-oblivious boosting function of Zhang et al. (2022), we propose the Decentralized Online Boosting Gradient Ascent (DOBGA) algorithm.
arXiv Detail & Related papers (2022-08-18T07:32:28Z) - Submodular + Concave [53.208470310734825]
It has been well established that first order optimization methods can converge to the maximal objective value of concave functions.
In this work, we initiate the study of maximizing smooth functions of the form $F(x) = G(x) + C(x)$, where $G$ is DR-submodular and $C$ is concave, over a convex body.
This class of functions is an extension of both concave and continuous DR-submodular functions for which no guarantee is known.
arXiv Detail & Related papers (2021-06-09T01:59:55Z) - Linear-Time Algorithms for Adaptive Submodular Maximization [17.19443570570189]
First, we develop a linear-time algorithm for the well-studied adaptive submodular maximization problem subject to a cardinality constraint.
Second, we introduce the concept of fully adaptive submodularity.
Our algorithm achieves a $\frac{1-1/e-\epsilon}{4-2/e-2\epsilon}$ approximation ratio using only $O(n\log\frac{1}{\epsilon})$ function evaluations.
arXiv Detail & Related papers (2020-07-08T15:54:28Z) - Fast and Private Submodular and $k$-Submodular Functions Maximization
with Matroid Constraints [27.070004659301162]
We study the problem of maximizing monotone submodular functions subject to matroid constraints in the framework of differential privacy.
We give the first $\frac{1}{2}$-approximation algorithm that preserves differential privacy for maximizing monotone $k$-submodular functions subject to matroid constraints.
arXiv Detail & Related papers (2020-06-28T23:18:58Z) - Continuous Submodular Function Maximization [91.17492610120324]
Continuous submodularity is a class of functions with a wide spectrum of applications.
We identify several applications of continuous submodular optimization, ranging from influence maximization and MAP inference to mean field inference.
arXiv Detail & Related papers (2020-06-24T04:37:31Z) - Exploiting Higher Order Smoothness in Derivative-free Optimization and
Continuous Bandits [99.70167985955352]
We study the problem of zero-order optimization of a strongly convex function.
We consider a randomized approximation of the projected gradient descent algorithm.
Our results imply that the zero-order algorithm is nearly optimal in terms of sample complexity and the problem parameters.
arXiv Detail & Related papers (2020-06-14T10:42:23Z)