Mini-batch Submodular Maximization
- URL: http://arxiv.org/abs/2401.12478v1
- Date: Tue, 23 Jan 2024 04:16:58 GMT
- Title: Mini-batch Submodular Maximization
- Authors: Gregory Schwartzman
- Abstract summary: We present the first mini-batch algorithm for maximizing a non-negative monotone decomposable submodular function.
We experimentally observe that our algorithm generates solutions that are far superior to those generated by the sparsifier-based approach.
- Score: 5.439020425819001
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present the first mini-batch algorithm for maximizing a non-negative
monotone decomposable submodular function, $F=\sum_{i=1}^N f^i$, under a set of
constraints. We improve over the sparsifier-based approach both in theory and
in practice. We experimentally observe that our algorithm generates solutions
that are far superior to those generated by the sparsifier-based approach.
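The abstract does not spell out the algorithm, but the decomposable structure suggests the core idea: at each greedy step, estimate the marginal gain of $F=\sum_{i=1}^N f^i$ from a small random batch of the $f^i$ rather than from all $N$ components. Below is a minimal sketch under that assumption, specialized to a cardinality constraint; all names and the batch-size parameter are illustrative, and this is not claimed to be the paper's exact procedure.

```python
import random

def mini_batch_greedy(fs, ground_set, k, batch_size, seed=0):
    """Greedy maximization of F(S) = sum_i fs[i](S) under |S| <= k,
    estimating marginal gains on a fresh random mini-batch of the
    component functions at every step (illustrative sketch)."""
    rng = random.Random(seed)
    n = len(fs)
    solution, candidates = set(), set(ground_set)
    for _ in range(k):
        if not candidates:
            break
        # Sample a mini-batch; scaling by n / |batch| makes the batch
        # sum an unbiased estimator of F's marginal gain.
        batch = rng.sample(fs, min(batch_size, n))
        scale = n / len(batch)

        def gain(e):
            return scale * sum(f(solution | {e}) - f(solution) for f in batch)

        best = max(candidates, key=gain)
        solution.add(best)
        candidates.remove(best)
    return solution
```

Each greedy step then costs O(batch_size · |ground_set|) component-function evaluations instead of O(N · |ground_set|); the paper's actual estimator, sampling scheme, and constraint handling may differ.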
Related papers
- Stochastic Zeroth-Order Optimization under Strongly Convexity and Lipschitz Hessian: Minimax Sample Complexity [59.75300530380427]
We consider the problem of optimizing second-order smooth and strongly convex functions where the algorithm has access only to noisy evaluations of the objective function at the points it queries.
We provide the first tight characterization for the rate of the minimax simple regret by developing matching upper and lower bounds.
arXiv Detail & Related papers (2024-06-28T02:56:22Z)
- Randomized Greedy Learning for Non-monotone Stochastic Submodular Maximization Under Full-bandit Feedback [98.29086113546045]
We investigate the problem of unconstrained multi-armed bandits with full-bandit feedback and stochastic rewards for submodular maximization.
We show that the proposed Randomized Greedy Learning (RGL) algorithm empirically outperforms other full-bandit variants in submodular and non-submodular settings.
arXiv Detail & Related papers (2023-02-02T18:52:14Z)
- Explicit Second-Order Min-Max Optimization Methods with Optimal Convergence Guarantee [86.05440220344755]
We propose and analyze inexact regularized Newton-type methods for finding a global saddle point of convex-concave unconstrained min-max optimization problems.
We show that the proposed methods generate iterates that remain within a bounded set and converge to an $\epsilon$-saddle point within $O(\epsilon^{-2/3})$ iterations in terms of a restricted gap function.
arXiv Detail & Related papers (2022-10-23T21:24:37Z)
- Faster First-Order Algorithms for Monotone Strongly DR-Submodular Maximization [11.919431962501479]
Continuous DR-submodular functions satisfy the Diminishing Returns (DR) property, which implies that they are concave along non-negative directions (formalized below).
We propose a new algorithm that matches the provably optimal $1-\frac{c}{e}$ approximation ratio after only $\lceil \frac{L}{\mu} \rceil$ iterations.
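For reference, one standard way to formalize the DR property paraphrased above (a textbook definition, not quoted from this paper's abstract):

```latex
% Diminishing returns: increasing coordinate i by a >= 0 gains at least
% as much at the smaller point x as at any larger point y >= x.
\[
  f(x + a\,e_i) - f(x) \;\ge\; f(y + a\,e_i) - f(y)
  \qquad \text{for all } x \le y,\ a \ge 0,\ \text{and all coordinates } i.
\]
```

For differentiable $f$ this is equivalent to $\nabla f(x) \ge \nabla f(y)$ whenever $x \le y$, which is exactly what makes $f$ concave along non-negative directions.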
arXiv Detail & Related papers (2021-11-15T18:53:14Z)
- Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks [79.16773494166644]
We consider the task of minimizing the sum of smooth and strongly convex functions stored in a decentralized manner across the nodes of a communication network.
We establish lower bounds for this setting and design two optimal algorithms that attain them.
We corroborate the theoretical efficiency of these algorithms by performing an experimental comparison with existing state-of-the-art methods.
arXiv Detail & Related papers (2021-06-08T15:54:44Z)
- Minimax Optimization with Smooth Algorithmic Adversaries [59.47122537182611]
We propose a new algorithm for the min-player to play against smooth algorithms deployed by an adversary.
Our algorithm is guaranteed to make monotonic progress (thus having no limit cycles) and to find an appropriate stationary point in a polynomial number of iterations.
arXiv Detail & Related papers (2021-06-02T22:03:36Z)
- Linear-Time Algorithms for Adaptive Submodular Maximization [17.19443570570189]
First, we develop a linear-time algorithm for the well-studied adaptive submodular maximization problem subject to a cardinality constraint.
Second, we introduce the concept of fully adaptive submodularity.
Our algorithm achieves a $\frac{1-1/e-\epsilon}{4-2/e-2\epsilon}$ approximation ratio using only $O(n \log \frac{1}{\epsilon})$ function evaluations.
arXiv Detail & Related papers (2020-07-08T15:54:28Z)
- Fast and Private Submodular and $k$-Submodular Functions Maximization with Matroid Constraints [27.070004659301162]
We study the problem of maximizing monotone submodular functions subject to matroid constraints in the framework of differential privacy.
We give the first $\frac{1}{2}$-approximation algorithm that preserves differential privacy for maximizing monotone $k$-submodular functions subject to matroid constraints.
arXiv Detail & Related papers (2020-06-28T23:18:58Z)
- Submodular Maximization in Clean Linear Time [42.51873363778082]
We provide the first deterministic algorithm that achieves the tight $1-1/e$ approximation guarantee for submodular maximization under a cardinality (size) constraint.
We show that when the cardinality allowed for a solution is constant, no algorithm making a sub-linear number of function evaluations can guarantee any constant ratio.
We extensively evaluate our algorithms on multiple real-life machine learning applications, including movie recommendation, location summarization, Twitter text summarization and video summarization.
arXiv Detail & Related papers (2020-06-16T17:06:45Z)
- Exploiting Higher Order Smoothness in Derivative-free Optimization and Continuous Bandits [99.70167985955352]
We study the problem of zero-order optimization of a strongly convex function.
We consider a randomized approximation of the projected gradient descent algorithm; a generic sketch of such a scheme is given below.
Our results imply that the zero-order algorithm is nearly optimal in terms of sample complexity and the problem parameters.
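As a rough illustration of such a scheme (a generic two-point estimator, with all names, the step size, and the smoothing radius being assumptions rather than the paper's choices):

```python
import numpy as np

def zeroth_order_pgd(f, project, x0, steps, lr=0.1, h=1e-4, seed=0):
    """Projected gradient descent on f using a randomized two-point
    gradient estimate built from (possibly noisy) function values
    (illustrative sketch, not the paper's exact estimator)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    d = x.size
    for _ in range(steps):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)            # random direction on the sphere
        # Two-point estimate: in expectation, g approximates the
        # gradient of a smoothed version of f.
        g = d * (f(x + h * u) - f(x - h * u)) / (2 * h) * u
        x = project(x - lr * g)           # gradient step, then projection
    return x
```

For instance, with `project = lambda x: x / max(1.0, np.linalg.norm(x))` the iterates stay in the Euclidean unit ball.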
arXiv Detail & Related papers (2020-06-14T10:42:23Z)
- Streaming Submodular Maximization under a $k$-Set System Constraint [42.31117997337689]
We propose a novel framework that converts streaming algorithms for monotone submodular maximization into streaming algorithms for non-monotone submodular maximization.
We also propose the first streaming algorithm for monotone submodular maximization subject to $k$-extendible and $k$-set system constraints.
arXiv Detail & Related papers (2020-02-09T12:32:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences arising from its use.