Resolving the Approximability of Offline and Online Non-monotone
DR-Submodular Maximization over General Convex Sets
- URL: http://arxiv.org/abs/2210.05965v1
- Date: Wed, 12 Oct 2022 07:04:24 GMT
- Title: Resolving the Approximability of Offline and Online Non-monotone
DR-Submodular Maximization over General Convex Sets
- Authors: Loay Mualem and Moran Feldman
- Abstract summary: DR-submodular continuous functions have many real-world applications in the domains of machine learning, communication systems, operations research and economics.
We present a polynomial time online algorithm matching the $\tfrac{1}{4}(1 - m)$-approximation of the state-of-the-art offline algorithm.
We also show that our online algorithm and Du's (2022) offline algorithm are both optimal in a strong sense.
- Score: 13.23676270963484
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, maximization of DR-submodular continuous functions became an
important research field, with many real-world applications in the domains of
machine learning, communication systems, operations research and economics. Most
of the works in this field study maximization subject to down-closed convex set
constraints due to an inapproximability result by Vondr\'ak (2013). However,
Durr et al. (2021) showed that one can bypass this inapproximability by proving
approximation ratios that are functions of $m$, the minimum
$\ell_{\infty}$-norm of any feasible vector. Given this observation, it is
possible to get results for maximizing a DR-submodular function subject to
general convex set constraints, which has led to multiple works on this
problem, the most recent of which is a polynomial time $\tfrac{1}{4}(1 -
m)$-approximation offline algorithm due to Du (2022). However, only a
sub-exponential time $\tfrac{1}{3\sqrt{3}}(1 - m)$-approximation algorithm is
known for the corresponding online problem. In this work, we present a
polynomial time online algorithm matching the $\tfrac{1}{4}(1 -
m)$-approximation of the state-of-the-art offline algorithm. We also present an
inapproximability result showing that our online algorithm and Du's (2022)
offline algorithm are both optimal in a strong sense. Finally, we study the
empirical performance of our algorithm and the algorithm of Du (which was only
theoretically studied previously), and show that they consistently outperform
previously suggested algorithms on revenue maximization, location summarization
and quadratic programming applications.
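As a rough illustration of the kind of first-order method used in this line of work, the following is a generic Frank-Wolfe-style ascent on a toy quadratic DR-submodular objective over the box $[0,1]^n$. It is a hypothetical sketch, not the algorithm of this paper or of Du (2022); the objective, step rule, and all names below are illustrative assumptions:

```python
import numpy as np

# Toy non-monotone DR-submodular objective:
#   f(x) = <a, x> - 0.5 * x^T H x  with H entrywise nonnegative,
# which is DR-submodular on the box [0, 1]^n.
rng = np.random.default_rng(0)
n = 5
H = rng.uniform(0.0, 1.0, size=(n, n))
H = (H + H.T) / 2                      # symmetric, nonnegative entries
a = rng.uniform(0.0, 2.0, size=n)

def f(x):
    return a @ x - 0.5 * x @ H @ x

def grad_f(x):
    return a - H @ x

def frank_wolfe_ascent(T=200):
    """Frank-Wolfe-style ascent over the box [0, 1]^n.

    The linear maximization step is solved coordinate-wise:
    move toward 1 where the gradient is positive, toward 0 otherwise.
    """
    x = np.zeros(n)
    for _ in range(T):
        g = grad_f(x)
        v = (g > 0).astype(float)      # linear-oracle solution on the box
        x = x + (v - x) / T            # small step toward the oracle vertex
    return x

x_hat = frank_wolfe_ascent()
```

On the box, the linear maximization oracle reduces to a coordinate-wise sign check of the gradient; over a general convex set it would instead require solving a linear program at every step, which is where most of the algorithmic difficulty in this line of work lies.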
Related papers
- Practical $0.385$-Approximation for Submodular Maximization Subject to a Cardinality Constraint [18.290666675596984]
Non-monotone constrained submodular maximization plays a crucial role in various machine learning applications.
Existing algorithms often struggle with a trade-off between approximation guarantees and practical efficiency.
We present a novel algorithm that combines a guarantee of $0.385$-approximation with a low and practical query complexity of $O(n+k^2)$.
arXiv Detail & Related papers (2024-05-22T20:56:57Z) - Provably Efficient Reinforcement Learning via Surprise Bound [66.15308700413814]
We propose a provably efficient reinforcement learning algorithm (both computationally and statistically) with general value function approximations.
Our algorithm achieves reasonable regret bounds when applied to both the linear setting and the sparse high-dimensional linear setting.
arXiv Detail & Related papers (2023-02-22T20:21:25Z) - Pseudonorm Approachability and Applications to Regret Minimization [73.54127663296906]
We convert high-dimensional $\ell_\infty$-approachability problems to low-dimensional pseudonorm approachability problems.
We develop an algorithmic theory of pseudonorm approachability, analogous to previous work on approachability for $\ell_2$ and other norms.
arXiv Detail & Related papers (2023-02-03T03:19:14Z) - Differentially-Private Hierarchical Clustering with Provable
Approximation Guarantees [79.59010418610625]
We study differentially private approximation algorithms for hierarchical clustering.
We show strong lower bounds for the problem: any $\epsilon$-DP algorithm must exhibit $O(|V|^2/\epsilon)$-additive error for an input dataset.
We propose a private $1+o(1)$ approximation algorithm which also recovers the blocks exactly.
arXiv Detail & Related papers (2023-01-31T19:14:30Z) - Refined Regret for Adversarial MDPs with Linear Function Approximation [50.00022394876222]
We consider learning in an adversarial Markov Decision Process (MDP) where the loss functions can change arbitrarily over $K$ episodes.
This paper provides two algorithms that improve the regret to $\tilde{\mathcal{O}}(K^{2/3})$ in the same setting.
arXiv Detail & Related papers (2023-01-30T14:37:21Z) - Minimax Optimization with Smooth Algorithmic Adversaries [59.47122537182611]
We propose a new algorithm for the min-player against smooth algorithms deployed by an adversary.
Our algorithm is guaranteed to make monotonic progress (having no limit cycles) and to find an appropriate stationary point.
arXiv Detail & Related papers (2021-06-02T22:03:36Z) - Adaptive Sampling for Fast Constrained Maximization of Submodular
Function [8.619758302080891]
We develop an algorithm with poly-logarithmic adaptivity for non-monotone submodular maximization under general side constraints.
Our algorithm is suitable to maximize a non-monotone submodular function under a $p$-system side constraint.
arXiv Detail & Related papers (2021-02-12T12:38:03Z) - Quick Streaming Algorithms for Maximization of Monotone Submodular
Functions in Linear Time [16.346069386394703]
We consider the problem of maximizing a monotone submodular function over a ground set of size $n$ subject to a cardinality constraint $k$.
We introduce the first deterministic algorithms with linear time complexity; these algorithms are streaming algorithms.
Our single-pass algorithm obtains a constant ratio in $\lceil n / c \rceil + c$ oracle queries, for any $c \ge 1$.
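The single-pass idea can be illustrated with a generic threshold heuristic: scan the stream once and keep an element only if its marginal gain clears a threshold and the cardinality budget allows it. This is a simplified sketch under assumed data, not the paper's algorithm; the stream, threshold `tau`, and coverage objective are all illustrative:

```python
# Toy ground set: each streamed element covers some items;
# the objective is the number of distinct items covered (monotone submodular).
stream = [
    ("e1", {1, 2}),
    ("e2", {2, 3}),
    ("e3", {4}),
    ("e4", {1, 2, 3, 4, 5}),
    ("e5", {6}),
]
k = 2

def single_pass(stream, k, tau=2):
    """One pass over the stream: keep an element if it adds at least
    tau newly covered items and the cardinality budget k is not exhausted."""
    S, covered = [], set()
    for name, elems in stream:
        gain = len(elems - covered)   # marginal gain of this element
        if len(S) < k and gain >= tau:
            S.append(name)
            covered |= elems
    return S, covered

S, covered = single_pass(stream, k)
```

Each element is examined exactly once and each examination costs one marginal-gain evaluation, which is what makes single-pass schemes attractive when oracle queries dominate the running time.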
arXiv Detail & Related papers (2020-09-10T16:35:54Z) - Revisiting Modified Greedy Algorithm for Monotone Submodular
Maximization with a Knapsack Constraint [75.85952446237599]
We show that a modified greedy algorithm can achieve an approximation factor of $0.305$.
We derive a data-dependent upper bound on the optimum.
It can also be used to significantly improve the efficiency of such algorithms as branch and bound.
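A common form of the modified greedy analyzed in this line of work takes the better of a gain-per-cost greedy solution and the best feasible singleton. The sketch below shows that structure on a toy weighted-coverage instance; the instance, helper names, and threshold values are assumptions for illustration, not taken from the paper:

```python
# Toy weighted-coverage instance: each item covers a set of elements.
items = {
    "a": ({1, 2, 3}, 2.0),          # (covered elements, cost)
    "b": ({3, 4}, 1.0),
    "c": ({5}, 1.0),
    "d": ({1, 2, 3, 4, 5}, 4.0),
}
BUDGET = 3.0

def coverage(S):
    """Monotone submodular objective: number of distinct elements covered."""
    covered = set()
    for i in S:
        covered |= items[i][0]
    return len(covered)

def greedy_ratio(budget):
    """Greedily add the item with the best marginal-gain-per-cost that fits."""
    S, spent = [], 0.0
    remaining = list(items)
    while True:
        best, best_ratio = None, 0.0
        for i in remaining:
            cost = items[i][1]
            gain = coverage(S + [i]) - coverage(S)
            if spent + cost <= budget and gain / cost > best_ratio:
                best, best_ratio = i, gain / cost
        if best is None:
            return S
        S.append(best)
        spent += items[best][1]
        remaining.remove(best)

def modified_greedy(budget):
    """Return the better of the ratio-greedy solution and the best single item."""
    S = greedy_ratio(budget)
    singles = [[i] for i in items if items[i][1] <= budget]
    best_single = max(singles, key=coverage)
    return S if coverage(S) >= coverage(best_single) else best_single

sol = modified_greedy(BUDGET)
```

The best-singleton comparison guards against instances where a single expensive, high-value item beats any combination the density greedy can afford; without it, ratio greedy alone has no constant-factor guarantee under a knapsack constraint.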
arXiv Detail & Related papers (2020-08-12T15:40:21Z) - Linear-Time Algorithms for Adaptive Submodular Maximization [17.19443570570189]
First, we develop a linear-time algorithm for the well-studied adaptive submodular maximization problem subject to a cardinality constraint.
Second, we introduce the concept of fully adaptive submodularity.
Our algorithm achieves a $\frac{1-1/e-\epsilon}{4-2/e-2\epsilon}$ approximation ratio using only $O(n\log\frac{1}{\epsilon})$ function evaluations.
arXiv Detail & Related papers (2020-07-08T15:54:28Z) - Submodular Maximization in Clean Linear Time [42.51873363778082]
We provide the first deterministic algorithm that achieves the tight $1-1/e$ approximation guarantee for submodular maximization under a cardinality (size) constraint.
We show that when the cardinality allowed for a solution is constant, no algorithm making a sub-linear number of function evaluations can guarantee any constant ratio.
We extensively evaluate our algorithms on multiple real-life machine learning applications, including movie recommendation, location summarization, twitter text summarization and video summarization.
arXiv Detail & Related papers (2020-06-16T17:06:45Z)