Adaptive Sampling for Fast Constrained Maximization of Submodular
Function
- URL: http://arxiv.org/abs/2102.06486v1
- Date: Fri, 12 Feb 2021 12:38:03 GMT
- Title: Adaptive Sampling for Fast Constrained Maximization of Submodular
Function
- Authors: Francesco Quinzan and Vanja Doskoč and Andreas Göbel and Tobias
Friedrich
- Abstract summary: We develop an algorithm with poly-logarithmic adaptivity for non-monotone submodular maximization under general side constraints.
Our algorithm is suitable to maximize a non-monotone submodular function under a $p$-system side constraint.
- Score: 8.619758302080891
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Several large-scale machine learning tasks, such as data summarization, can
be approached by maximizing functions that satisfy submodularity. These
optimization problems often involve complex side constraints, imposed by the
underlying application. In this paper, we develop an algorithm with
poly-logarithmic adaptivity for non-monotone submodular maximization under
general side constraints. The adaptive complexity of a problem is the minimal
number of sequential rounds required to achieve the objective.
Our algorithm is suitable to maximize a non-monotone submodular function
under a $p$-system side constraint, and it achieves a $(p +
O(\sqrt{p}))$-approximation for this problem, after only poly-logarithmic
adaptive rounds and polynomial queries to the valuation oracle function.
Furthermore, our algorithm achieves a $(p + O(1))$-approximation when the given
side constraint is a $p$-extendible system.
This algorithm yields an exponential speed-up, with respect to the
adaptivity, over any other known constant-factor approximation algorithm for
this problem. It also competes with previously known results in terms of the
query complexity. We perform experiments on a range of real-world
applications and find that, in comparison with commonly used heuristics, our
algorithm performs better on these instances.
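For contrast with the poly-logarithmic adaptivity claimed above, here is a minimal sketch of the classical fully sequential greedy baseline for a $p$-system constraint. This is not the paper's adaptive-sampling algorithm; the value oracle `f`, the independence oracle `is_independent`, and the toy coverage instance are all illustrative assumptions.

```python
# A minimal sketch of the classical sequential greedy baseline (NOT the paper's
# adaptive-sampling algorithm). `f` is a value oracle and `is_independent` is an
# independence oracle for the p-system; both names are illustrative assumptions.

def greedy_p_system(ground_set, f, is_independent):
    """Repeatedly add the feasible element with the largest marginal gain."""
    S = []
    remaining = set(ground_set)
    while remaining:
        best, best_gain = None, 0.0
        for e in remaining:
            if not is_independent(S + [e]):
                continue
            gain = f(S + [e]) - f(S)
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:  # no feasible element improves the value
            break
        S.append(best)
        remaining.discard(best)
    return S

# Toy example: a coverage function under a cardinality constraint (a 1-system).
sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c"}}
f = lambda S: len(set().union(*(sets[e] for e in S))) if S else 0
sol = greedy_p_system(sets, f, lambda S: len(S) <= 2)  # covers all 3 elements
```

Each loop iteration depends on the previous one, so this baseline incurs up to $n$ adaptive rounds; the point of the paper is to achieve a comparable constant-factor guarantee in only poly-logarithmically many rounds.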
Related papers
- Improved Parallel Algorithm for Non-Monotone Submodular Maximization under Knapsack Constraint [0.0]
This work proposes an efficient parallel algorithm for non-monotone submodular maximization under a knapsack constraint.
Our algorithm improves the approximation factor of the existing parallel algorithm from $8+\epsilon$ to $7+\epsilon$ with $O(\log n)$ adaptive complexity.
arXiv Detail & Related papers (2024-09-06T17:17:52Z) - Practical Parallel Algorithms for Non-Monotone Submodular Maximization [20.13836086815112]
Submodular maximization has found extensive applications in various domains within the field of artificial intelligence.
A key measure of the parallelizability of a submodular algorithm is its adaptive complexity, which indicates the number of sequential rounds in which a polynomial number of queries to the objective function can be executed in parallel.
We propose the first algorithm with both a provable approximation ratio and sublinear adaptive complexity for the problem of non-monotone submodular maximization subject to a $k$-system constraint.
arXiv Detail & Related papers (2023-08-21T11:48:34Z) - Efficiently Learning One-Hidden-Layer ReLU Networks via Schur
Polynomials [50.90125395570797]
We study the problem of PAC learning a linear combination of $k$ ReLU activations under the standard Gaussian distribution on $\mathbb{R}^d$ with respect to the square loss.
Our main result is an efficient algorithm for this learning task with sample and computational complexity $(dk/\epsilon)^{O(k)}$, where $\epsilon>0$ is the target accuracy.
arXiv Detail & Related papers (2023-07-24T14:37:22Z) - Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit connections between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions.
arXiv Detail & Related papers (2023-02-01T08:50:48Z) - Best of Both Worlds: Practical and Theoretically Optimal Submodular Maximization in Parallel [17.462968952951883]
The main algorithm is assembled from two components, which may be of independent interest.
A variant of LINEARSEQ is shown to have adaptive complexity of $O(\log(n))$, smaller than that of any previous algorithm in the literature.
arXiv Detail & Related papers (2021-11-15T17:10:40Z) - Minimax Optimization with Smooth Algorithmic Adversaries [59.47122537182611]
We propose a new algorithm for the min-player against smooth algorithms deployed by an adversary.
Our algorithm is guaranteed to make monotonic progress (having no limit cycles) and to find an appropriate stationary point in a polynomial number of iterations.
arXiv Detail & Related papers (2021-06-02T22:03:36Z) - Submodular Maximization subject to a Knapsack Constraint: Combinatorial
Algorithms with Near-optimal Adaptive Complexity [13.416309759182635]
We obtain the first constant-factor approximation for non-monotone submodular maximization with near-optimal $O(\log n)$ adaptive complexity.
Our algorithm asks $\tilde{O}(n^2)$ value queries, but can be modified to run with only $\tilde{O}(n)$ instead.
This is also the first approach with sublinear adaptive complexity for the problem, and it yields results comparable to the state-of-the-art even for the special cases of cardinality constraints or monotone objectives.
arXiv Detail & Related papers (2021-02-16T18:15:51Z) - Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm that employs a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z) - Online Model Selection for Reinforcement Learning with Function
Approximation [50.008542459050155]
We present a meta-algorithm that adapts to the optimal complexity with $\tilde{O}(L^{5/6} T^{2/3})$ regret.
We also show that the meta-algorithm automatically admits significantly improved instance-dependent regret bounds.
arXiv Detail & Related papers (2020-11-19T10:00:54Z) - Linear-Time Algorithms for Adaptive Submodular Maximization [17.19443570570189]
First, we develop a linear-time algorithm for a well-studied adaptive submodular maximization problem subject to a cardinality constraint.
Second, we introduce the concept of fully adaptive submodularity.
Our algorithm achieves a $\frac{1-1/e-\epsilon}{4-2/e-2\epsilon}$ approximation ratio using only $O(n\log\frac{1}{\epsilon})$ function evaluations.
arXiv Detail & Related papers (2020-07-08T15:54:28Z) - Refined bounds for algorithm configuration: The knife-edge of dual class
approximability [94.83809668933021]
We investigate how large a training set should be to ensure that a parameter's average metric performance over the training set is close to its expected future performance.
We show that if this approximation holds under the L-infinity norm, we can provide strong sample complexity bounds.
We empirically evaluate our bounds in the context of integer programming, one of the most powerful tools in computer science.
arXiv Detail & Related papers (2020-06-21T15:32:21Z)
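Several of the entries above measure parallelism via adaptive complexity. The key observation can be sketched in a few lines (illustrative only, not any specific paper's algorithm): within one adaptive round, all marginal-gain queries are made against the same fixed solution, so they are mutually independent and can be evaluated in parallel.

```python
# One "adaptive round" of a threshold-style procedure, illustrative only:
# all marginal-gain queries against the fixed current solution S are mutually
# independent, so they can be evaluated in parallel within the round.

def threshold_round(candidates, S, f, tau):
    """Keep the candidates whose marginal gain w.r.t. S is at least tau."""
    base = f(S)
    gains = {e: f(S + [e]) - base for e in candidates}  # a parallelizable batch
    return [e for e in candidates if gains[e] >= tau]

# Toy example with an additive (modular) objective.
w = {"a": 3.0, "b": 1.0, "c": 2.0}
f = lambda S: sum(w[e] for e in S)
survivors = threshold_round(list(w), [], f, 2.0)  # keeps "a" and "c"
```

Low-adaptivity algorithms chain a small number of such rounds, for instance by lowering the threshold `tau` geometrically, so only logarithmically many distinct thresholds are ever considered.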
This list is automatically generated from the titles and abstracts of the papers in this site.