Instance Specific Approximations for Submodular Maximization
- URL: http://arxiv.org/abs/2102.11911v1
- Date: Tue, 23 Feb 2021 19:39:32 GMT
- Title: Instance Specific Approximations for Submodular Maximization
- Authors: Eric Balkanski, Sharon Qian, Yaron Singer
- Abstract summary: We look for methods to benchmark the performance of algorithms against optimal solutions on real-world instances.
A major question is how to measure the performance of an algorithm in comparison to an optimal solution on instances we encounter in practice.
Our main contribution is not a new algorithm for submodular maximization but an analytical method that measures how close an algorithm for submodular maximization is to optimal on a given problem instance.
- Score: 45.91235224228292
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: For many optimization problems in machine learning, finding an optimal
solution is computationally intractable and we seek algorithms that perform
well in practice. Since computational intractability often results from
pathological instances, we look for methods to benchmark the performance of
algorithms against optimal solutions on real-world instances. The main
challenge is that an optimal solution cannot be efficiently computed for
intractable problems, and we therefore often do not know how far a solution is
from being optimal. A major question is therefore how to measure the
performance of an algorithm in comparison to an optimal solution on instances
we encounter in practice.
In this paper, we address this question in the context of submodular
optimization problems. For the canonical problem of submodular maximization
under a cardinality constraint, it is intractable to compute a solution that is
better than a $1-1/e \approx 0.63$ fraction of the optimum. Algorithms like the
celebrated greedy algorithm are guaranteed to achieve this $1-1/e$ bound on any
instance and are used in practice.
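As a point of reference, the sketch below shows the standard greedy algorithm for monotone submodular maximization under a cardinality constraint. It is a minimal illustration in Python, not code from the paper; the callable objective `f`, the ground set, and the toy coverage example are all assumptions made for the example.

```python
# Minimal sketch of the standard greedy algorithm for maximizing a monotone
# submodular function f under a cardinality constraint |S| <= k.
# The callable `f` maps a frozenset of elements to a float; this interface
# is an illustrative assumption, not the paper's code.

def greedy(f, ground_set, k):
    selected = set()
    for _ in range(k):
        current = f(frozenset(selected))
        best_elem, best_gain = None, 0.0
        for e in ground_set:
            if e in selected:
                continue
            gain = f(frozenset(selected | {e})) - current
            if gain > best_gain:
                best_elem, best_gain = e, gain
        if best_elem is None:  # no remaining element improves the objective
            break
        selected.add(best_elem)
    return selected

# Toy example: maximum coverage, a canonical monotone submodular objective.
sets = {0: {1, 2}, 1: {2, 3, 4}, 2: {4, 5}}
cover = lambda S: float(len(set().union(*(sets[i] for i in S)))) if S else 0.0
print(greedy(cover, sets.keys(), 2))  # e.g. {0, 1}, covering 4 elements
```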
Our main contribution is not a new algorithm for submodular maximization but
an analytical method that measures how close an algorithm for submodular
maximization is to optimal on a given problem instance. We use this method to
show that on a wide variety of real-world datasets and objectives, the
approximation of the solution found by greedy goes well beyond $1-1/e$ and is
often at least 0.95. We develop this method using a novel technique that lower
bounds the objective of a dual minimization problem to obtain an upper bound on
the value of an optimal solution to the primal maximization problem.
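The paper's certificate comes from lower bounding a dual minimization problem; that construction is not reproduced here. As a simpler illustration of the same idea of an instance-specific, data-dependent guarantee, the sketch below uses the classic submodularity bound $f(OPT) \le f(S) + $ (sum of the $k$ largest marginal gains with respect to $S$). The function name and interface are assumptions for the example, not the paper's method.

```python
import heapq

# Illustrative data-dependent certificate (NOT the dual-based bound from this
# paper): by monotonicity and submodularity, for any set S,
#   f(OPT) <= f(S) + sum of the k largest marginal gains f(e | S),
# so f(S) divided by that upper bound certifies an instance-specific
# approximation ratio for the solution S on this particular instance.

def instance_specific_ratio(f, ground_set, solution, k):
    base = f(frozenset(solution))
    gains = [f(frozenset(set(solution) | {e})) - base
             for e in ground_set if e not in solution]
    upper = base + sum(heapq.nlargest(k, [g for g in gains if g > 0]))
    return base / upper if upper > 0 else 1.0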
Related papers
- Improved Parallel Algorithm for Non-Monotone Submodular Maximization under Knapsack Constraint [0.0]
This work proposes an efficient parallel algorithm for non-monotone submodular maximization under a knapsack constraint.
Our algorithm improves on the existing parallel algorithm, reducing the approximation factor from $8+\epsilon$ to $7+\epsilon$ with $O(\log n)$ adaptive complexity.
arXiv Detail & Related papers (2024-09-06T17:17:52Z) - Accelerating Cutting-Plane Algorithms via Reinforcement Learning
Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z) - Minimax Optimization with Smooth Algorithmic Adversaries [59.47122537182611]
We propose a new algorithm for the min-player against smooth algorithms deployed by an adversary.
Our algorithm is guaranteed to make monotonic progress, having no limit cycles, and to terminate after an appropriate number of gradient ascent steps.
arXiv Detail & Related papers (2021-06-02T22:03:36Z) - Adaptive Sampling for Fast Constrained Maximization of Submodular
Function [8.619758302080891]
We develop an algorithm with poly-logarithmic adaptivity for non-monotone submodular maximization under general side constraints.
Our algorithm can maximize a non-monotone submodular function under a $p$-system side constraint.
arXiv Detail & Related papers (2021-02-12T12:38:03Z) - An Asymptotically Optimal Primal-Dual Incremental Algorithm for
Contextual Linear Bandits [129.1029690825929]
We introduce a novel algorithm improving over the state-of-the-art along multiple dimensions.
We establish minimax optimality for any learning horizon in the special case of non-contextual linear bandits.
arXiv Detail & Related papers (2020-10-23T09:12:47Z) - Revisiting Modified Greedy Algorithm for Monotone Submodular
Maximization with a Knapsack Constraint [75.85952446237599]
We show that a modified greedy algorithm can achieve an approximation factor of $0.305$.
We derive a data-dependent upper bound on the optimum.
It can also be used to significantly improve the efficiency of such algorithms as branch and bound.
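A minimal, hedged sketch of the textbook modified greedy for a knapsack constraint follows: run the cost-effectiveness greedy (marginal gain per unit cost) within budget, then return the better of that set and the best single affordable element. This is the standard variant of the idea; the names and interface are illustrative and not drawn from the cited paper.

```python
# Hedged sketch of the standard "modified greedy" for monotone submodular
# maximization under a knapsack constraint. `f` maps frozensets to floats,
# `costs` maps elements to positive costs; both are illustrative assumptions.

def modified_greedy(f, costs, budget):
    ground_set = list(costs)
    selected, spent = set(), 0.0
    remaining = set(ground_set)
    while remaining:
        base = f(frozenset(selected))
        best, best_ratio = None, 0.0
        for e in remaining:
            if spent + costs[e] > budget:
                continue  # skip elements that no longer fit the budget
            ratio = (f(frozenset(selected | {e})) - base) / costs[e]
            if ratio > best_ratio:
                best, best_ratio = e, ratio
        if best is None:  # nothing affordable improves the objective
            break
        selected.add(best)
        spent += costs[best]
        remaining.discard(best)
    # Compare against the single best affordable element.
    singles = [e for e in ground_set if costs[e] <= budget]
    best_single = max(singles, key=lambda e: f(frozenset({e})), default=None)
    if best_single is not None and f(frozenset({best_single})) > f(frozenset(selected)):
        return {best_single}
    return selected
```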
arXiv Detail & Related papers (2020-08-12T15:40:21Z) - Linear-Time Algorithms for Adaptive Submodular Maximization [17.19443570570189]
First, we develop a linear-time algorithm for the well-studied problem of adaptive submodular maximization subject to a cardinality constraint.
Second, we introduce the concept of fully adaptive submodularity.
Our algorithm achieves a $\frac{1-1/e-\epsilon}{4-2/e-2\epsilon}$ approximation ratio using only $O(n\log\frac{1}{\epsilon})$ function evaluations.
arXiv Detail & Related papers (2020-07-08T15:54:28Z) - Optimal and Practical Algorithms for Smooth and Strongly Convex
Decentralized Optimization [21.555331273873175]
We consider the task of decentralized minimization of the sum of smooth strongly convex functions stored across the nodes of a network.
We propose two new algorithms for this decentralized optimization problem and equip them with complexity guarantees.
arXiv Detail & Related papers (2020-06-21T11:23:20Z) - Private Stochastic Convex Optimization: Optimal Rates in Linear Time [74.47681868973598]
We study the problem of minimizing the population loss given i.i.d. samples from a distribution over convex loss functions.
A recent work of Bassily et al. has established the optimal bound on the excess population loss achievable given $n$ samples.
We describe two new techniques for deriving convex optimization algorithms, both achieving the optimal bound on excess loss and using $O(\min\{n, n^2/d\})$ gradient computations.
arXiv Detail & Related papers (2020-05-10T19:52:03Z) - Towards Solving Large-scale Expensive Optimization Problems Efficiently
Using Coordinate Descent Algorithm [3.1600708674008393]
We propose a modified Coordinate Descent algorithm (MCD) to tackle LSEGO problems with a limited computational budget.
Our proposed algorithm has two leading steps: finding the region of interest, and then shrinking the search space by repeatedly folding it in half, at an exponential rate (a rough sketch of this halving idea appears below).
The proposed algorithm is compared with cooperative co-evolution with delta grouping on 20 benchmark functions with dimension 1000.
arXiv Detail & Related papers (2020-03-07T22:48:38Z)
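Only to illustrate the "fold the search space in half" idea mentioned above, here is a rough coordinate-descent sketch that repeatedly halves each coordinate's interval. It is not the MCD algorithm from the cited paper; all names, parameters, and the quadratic test function are assumptions for the example.

```python
import numpy as np

# Rough illustration (not the cited paper's MCD algorithm) of coordinate
# descent that repeatedly halves each coordinate's search interval, mimicking
# the "shrink the search space by folding it in half" idea.

def halving_coordinate_descent(obj, lower, upper, iters=50):
    lower, upper = np.array(lower, float), np.array(upper, float)
    x = (lower + upper) / 2.0
    for _ in range(iters):
        for i in range(len(x)):
            mid = (lower[i] + upper[i]) / 2.0
            left = x.copy();  left[i] = (lower[i] + mid) / 2.0
            right = x.copy(); right[i] = (mid + upper[i]) / 2.0
            # Keep the half-interval whose midpoint evaluates better, then fold.
            if obj(left) <= obj(right):
                upper[i] = mid
                x[i] = left[i]
            else:
                lower[i] = mid
                x[i] = right[i]
    return x

# Example: minimize a simple quadratic in 1000 dimensions on [-5, 5]^1000.
x_star = halving_coordinate_descent(lambda v: float(np.sum((v - 1.0) ** 2)),
                                    [-5.0] * 1000, [5.0] * 1000)
```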