Regularized Submodular Maximization at Scale
- URL: http://arxiv.org/abs/2002.03503v1
- Date: Mon, 10 Feb 2020 02:37:18 GMT
- Title: Regularized Submodular Maximization at Scale
- Authors: Ehsan Kazemi and Shervin Minaee and Moran Feldman and Amin Karbasi
- Abstract summary: Submodularity is inherently related to the notions of diversity, coverage, and representativeness.
We propose methods for maximizing a regularized submodular function $f = g - \ell$ expressed as the difference between a monotone submodular function $g$ and a modular function $\ell$.
- Score: 45.914693923126826
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose scalable methods for maximizing a regularized
submodular function $f = g - \ell$ expressed as the difference between a
monotone submodular function $g$ and a modular function $\ell$. Indeed,
submodularity is inherently related to the notions of diversity, coverage, and
representativeness. In particular, finding the mode of many popular
probabilistic models of diversity, such as determinantal point processes,
submodular probabilistic models, and strongly log-concave distributions,
involves maximization of (regularized) submodular functions. Since a
regularized function $f$ can potentially take on negative values, the classic
theory of submodular maximization, which heavily relies on the non-negativity
assumption of submodular functions, may not be applicable. To circumvent this
challenge, we develop the first one-pass streaming algorithm for maximizing a
regularized submodular function subject to a $k$-cardinality constraint. It
returns a solution $S$ with the guarantee that $f(S)\geq(\phi^{-2}-\epsilon)
\cdot g(OPT)-\ell (OPT)$, where $\phi$ is the golden ratio. Furthermore, we
develop the first distributed algorithm that returns a solution $S$ with the
guarantee that $\mathbb{E}[f(S)] \geq (1-\epsilon) [(1-e^{-1}) \cdot
g(OPT)-\ell(OPT)]$ in $O(1/ \epsilon)$ rounds of MapReduce computation, without
keeping multiple copies of the entire dataset in each round (as it is usually
done). We should highlight that our result, even for the unregularized case
where the modular term $\ell$ is zero, improves the memory and communication
complexity of the existing work by a factor of $O(1/ \epsilon)$ while arguably
provides a simpler distributed algorithm and a unifying analysis. We also
empirically study the performance of our scalable methods on a set of real-life
applications, including finding the mode of distributions, data summarization,
and product recommendation.
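The objective $f = g - \ell$ can be made concrete with a toy sketch: below, $g$ is a monotone submodular coverage function and $\ell$ a modular (additive) cost, and a plain greedy rule adds the element with the best marginal gain of $f$. This is only an illustrative baseline under made-up data, not the paper's streaming or distributed algorithm, and it carries none of the paper's guarantees.

```python
# Illustrative sketch: greedy selection for a regularized objective
# f(S) = g(S) - ell(S), where g is a monotone submodular coverage
# function and ell is a modular cost. Toy data; not the paper's method.

def coverage(sets, S):
    """g(S): number of ground elements covered by the chosen sets."""
    covered = set()
    for i in S:
        covered |= sets[i]
    return len(covered)

def greedy_regularized(sets, cost, k):
    """Greedily add, up to k times, the set with the largest marginal
    gain of f = g - ell; stop early if no gain is strictly positive."""
    S = []
    for _ in range(k):
        best, best_gain = None, 0.0
        for i in range(len(sets)):
            if i in S:
                continue
            gain = (coverage(sets, S + [i]) - coverage(sets, S)) - cost[i]
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:
            break
        S.append(best)
    return S

# Made-up instance: four candidate sets with additive costs.
sets = [{1, 2, 3}, {3, 4}, {5}, {1, 2, 3, 4}]
cost = [1.0, 0.5, 2.0, 1.5]
S = greedy_regularized(sets, cost, k=2)  # picks set 3, then stops
```

Note how the regularization matters: after picking the large set, every remaining marginal coverage gain is outweighed by its cost, so the algorithm stops below the cardinality budget, which a purely monotone objective would never do.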
Related papers
- Fair Submodular Cover [18.37610521373708]
We present the study of Fair Submodular Cover (FSC), where we are given a ground set $U$, a monotone submodular function $f: 2^U \to \mathbb{R}_{\geq 0}$, and a threshold $\tau$.
We first introduce discrete algorithms for FSC that achieve a bicriteria approximation ratio of $(\frac{1}{\epsilon}, 1-O(\epsilon))$.
We then present a continuous algorithm that achieves a $(\frac{1}{\epsilon}, 1-O(\epsilon))$-
arXiv Detail & Related papers (2024-07-05T18:37:09Z) - Randomized Greedy Learning for Non-monotone Stochastic Submodular Maximization Under Full-bandit Feedback [98.29086113546045]
We investigate the problem of unconstrained multi-armed bandits with full-bandit feedback and stochastic rewards for submodular maximization.
We show that the proposed Randomized Greedy Learning (RGL) algorithm empirically outperforms other full-bandit variants in submodular and non-submodular settings.
arXiv Detail & Related papers (2023-02-02T18:52:14Z) - Partial-Monotone Adaptive Submodular Maximization [19.29174615532181]
We show that a sampling-based policy achieves an approximation ratio of $(m+1)/10$ when the utility function is $m$-adaptive monotone and adaptive submodular.
This leads to improved bounds for many machine learning applications whose utility functions are almost adaptive monotone.
arXiv Detail & Related papers (2022-07-26T12:16:12Z) - Submodular + Concave [53.208470310734825]
It has been well established that first order optimization methods can converge to the maximal objective value of concave functions.
In this work, we initiate the study of maximizing functions of the form $F(x) = G(x) + C(x)$ over a convex body, where $G$ is a smooth DR-submodular function and $C$ is a smooth concave function.
This class of functions is an extension of both concave and continuous DR-submodular functions, for which no guarantee was previously known.
arXiv Detail & Related papers (2021-06-09T01:59:55Z) - The Power of Subsampling in Submodular Maximization [51.629656762796564]
We show that this approach leads to optimal/state-of-the-art results despite being much simpler than existing methods.
We empirically demonstrate the effectiveness of our algorithms on video summarization, location summarization, and movie recommendation tasks.
arXiv Detail & Related papers (2021-04-06T20:25:57Z) - Finding Global Minima via Kernel Approximations [90.42048080064849]
We consider the global minimization of smooth functions based solely on function evaluations.
In this paper, we consider an approach that jointly models the function to approximate and finds a global minimum.
arXiv Detail & Related papers (2020-12-22T12:59:30Z) - Beyond Pointwise Submodularity: Non-Monotone Adaptive Submodular Maximization in Linear Time [17.19443570570189]
We study the non-monotone adaptive submodular maximization problem subject to a cardinality constraint.
We show that the adaptive random greedy algorithm achieves a $1/e$ approximation ratio under adaptive submodularity.
We propose a faster algorithm that achieves a $1-1/e-\epsilon$ approximation ratio in expectation with $O(n\epsilon^{-2}\log \epsilon^{-1})$ value oracle queries.
arXiv Detail & Related papers (2020-08-11T21:06:52Z) - Model-Free Reinforcement Learning: from Clipped Pseudo-Regret to Sample Complexity [59.34067736545355]
Given an MDP with $S$ states and $A$ actions, the discount factor $\gamma \in (0,1)$, and an approximation threshold $\epsilon > 0$, we provide a model-free algorithm to learn an $\epsilon$-optimal policy.
For small enough $epsilon$, we show an improved algorithm with sample complexity.
arXiv Detail & Related papers (2020-06-06T13:34:41Z)
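The "Submodular + Concave" entry above appeals to the classical fact that first-order methods converge to the maximum of a concave function. A minimal sketch of that claim on a made-up concave quadratic (the function, step size, and iteration count are illustrative assumptions, not from any of the listed papers):

```python
# Gradient ascent on the concave function F(x) = -(x - 3)^2,
# whose unique maximizer is x = 3. Toy example; F is made up.

def grad_ascent(grad, x0, lr=0.1, steps=200):
    """Plain first-order ascent: x <- x + lr * F'(x)."""
    x = x0
    for _ in range(steps):
        x += lr * grad(x)
    return x

# F'(x) = -2 * (x - 3); the iterate contracts toward 3 each step.
x_star = grad_ascent(lambda x: -2.0 * (x - 3.0), x0=0.0)
```

For this choice of step size the error shrinks by a constant factor per iteration, so `x_star` lands arbitrarily close to the maximizer; the cited work extends this kind of first-order guarantee to sums of concave and continuous DR-submodular terms.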
This list is automatically generated from the titles and abstracts of the papers in this site.