Online and Streaming Algorithms for Constrained $k$-Submodular
Maximization
- URL: http://arxiv.org/abs/2305.16013v1
- Date: Thu, 25 May 2023 12:53:17 GMT
- Title: Online and Streaming Algorithms for Constrained $k$-Submodular
Maximization
- Authors: Fabian Spaeh, Alina Ene, Huy L. Nguyen
- Abstract summary: Constrained $k$-submodular maximization is a general framework that captures many discrete optimization problems such as ad allocation, influence maximization, personalized recommendation, and many others.
In this work, we develop single-pass streaming and online algorithms for constrained $k$-submodular maximization with both monotone and general (possibly non-monotone) objectives.
- Score: 25.15934533974176
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Constrained $k$-submodular maximization is a general framework that captures
many discrete optimization problems such as ad allocation, influence
maximization, personalized recommendation, and many others. In many of these
applications, datasets are large or decisions need to be made in an online
manner, which motivates the development of efficient streaming and online
algorithms. In this work, we develop single-pass streaming and online
algorithms for constrained $k$-submodular maximization with both monotone and
general (possibly non-monotone) objectives subject to cardinality and knapsack
constraints. Our algorithms achieve provable constant-factor approximation
guarantees which improve upon the state of the art in almost all settings.
Moreover, they are combinatorial and very efficient, and have optimal space and
running time. We experimentally evaluate our algorithms on instances for ad
allocation and other applications, where we observe that our algorithms are
efficient and scalable, and construct solutions that are comparable in value to
offline greedy algorithms.
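To make the setting concrete, here is a minimal, hypothetical sketch of a single-pass greedy rule for monotone $k$-submodular maximization under a total-size constraint: each arriving element is assigned to whichever of the $k$ types yields the largest marginal gain, while budget remains. This is an illustrative toy, not the algorithm analyzed in the paper; the coverage objective and all names below are invented for the example.

```python
# Toy single-pass greedy for monotone k-submodular maximization under a
# total-size constraint. Illustrative sketch only; NOT the paper's algorithm.

def stream_greedy(items, k, budget, f):
    """Assign each arriving item to the best of k types while budget remains."""
    solution = {}  # item -> type index in {0, ..., k-1}
    for e in items:
        if len(solution) >= budget:
            break  # total-size (cardinality) budget exhausted
        base = f(solution)
        # Marginal gain of assigning e to each of the k types.
        gain, best = max((f({**solution, e: i}) - base, i) for i in range(k))
        if gain > 0:
            solution[e] = best
    return solution

# Example: a coverage-style monotone k-submodular objective with k = 2.
# cover[e][i] is the ground set covered when item e is assigned type i.
cover = {"a": [{1, 2}, {3}], "b": [{2}, {3, 4}], "c": [{5}, {1}]}

def coverage(solution):
    total = 0
    for i in range(2):
        covered = set()
        for e, t in solution.items():
            if t == i:
                covered |= cover[e][i]
        total += len(covered)
    return total
```

Running `stream_greedy(["a", "b", "c"], 2, 2, coverage)` assigns `a` to type 0 and `b` to type 1 for a value of 4; once the budget of 2 is filled, `c` is skipped.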
Related papers
- Improved Parallel Algorithm for Non-Monotone Submodular Maximization under Knapsack Constraint [0.0]
This work proposes an efficient parallel algorithm for non-monotone submodular maximization under a knapsack constraint.
Our algorithm improves the approximation factor of the existing parallel algorithm from $8+\epsilon$ to $7+\epsilon$ with $O(\log n)$ adaptive complexity.
arXiv Detail & Related papers (2024-09-06T17:17:52Z) - Practical Parallel Algorithms for Non-Monotone Submodular Maximization [20.13836086815112]
Submodular maximization has found extensive applications in various domains within the field of artificial intelligence.
One measure of the parallelizability of a submodular maximization algorithm is its adaptive complexity, which indicates the number of sequential rounds in which a polynomial number of queries to the objective function can be executed in parallel.
We propose the first algorithm with both a provable approximation ratio and sublinear adaptive complexity for the problem of non-monotone submodular maximization subject to a $k$-system constraint.
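As an aside on the notion of adaptive complexity used above: the marginal-gain queries issued within one adaptive round depend only on the current solution, not on each other, so they can be evaluated in parallel. A minimal, hypothetical illustration (the coverage objective and all names are invented for the example, and this is not any specific paper's method):

```python
# One "adaptive round": marginal-gain queries for all candidates depend
# only on the current solution, so a parallel implementation could issue
# them simultaneously. Hypothetical illustration of adaptive complexity.

def adaptive_round(f, solution, candidates):
    base = f(solution)
    # Independent queries: each uses only `solution`, never another gain.
    gains = {e: f(solution | {e}) - base for e in candidates}
    best = max(gains, key=gains.get)
    return best, gains

# Toy coverage objective over a ground set.
cover = {"a": {1, 2}, "b": {3}, "c": {4, 5, 6}}

def coverage(solution):
    return len(set().union(*(cover[e] for e in solution), set()))
```

Here `adaptive_round(coverage, set(), {"a", "b", "c"})` computes all three gains in a single round and selects `"c"`, whose assignment covers the most new ground elements.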
arXiv Detail & Related papers (2023-08-21T11:48:34Z) - Accelerating Cutting-Plane Algorithms via Reinforcement Learning
Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z) - Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit connections between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions.
arXiv Detail & Related papers (2023-02-01T08:50:48Z) - Enhanced Opposition Differential Evolution Algorithm for Multimodal
Optimization [0.2538209532048866]
Many real-world problems are multimodal in nature, consisting of multiple optimum values.
Classical gradient-based methods fail on optimization problems whose objective functions are either discontinuous or non-differentiable.
We propose an algorithm known as the Enhanced Opposition Differential Evolution (EODE) algorithm to solve multimodal optimization problems (MMOPs).
arXiv Detail & Related papers (2022-08-23T16:18:27Z) - Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex
Decentralized Optimization Over Time-Varying Networks [79.16773494166644]
We consider the task of minimizing the sum of smooth and strongly convex functions stored in a decentralized manner across the nodes of a communication network.
We establish lower bounds on the communication and computation complexity of this task, and design two optimal algorithms that attain them.
We corroborate the theoretical efficiency of these algorithms by performing an experimental comparison with existing state-of-the-art methods.
arXiv Detail & Related papers (2021-06-08T15:54:44Z) - Minimax Optimization with Smooth Algorithmic Adversaries [59.47122537182611]
We propose a new algorithm for the min-player against smooth algorithms deployed by an adversary.
Our algorithm is guaranteed to make monotonic progress (thus having no limit cycles), and to find an appropriate stationary point in a polynomial number of iterations.
arXiv Detail & Related papers (2021-06-02T22:03:36Z) - Instance Specific Approximations for Submodular Maximization [45.91235224228292]
We look for methods to benchmark the performance of algorithms against optimal solutions on real-world instances.
A major question is how to measure the performance of an algorithm in comparison to an optimal solution on instances we encounter in practice.
Our main contribution is not a new algorithm for submodular maximization but an analytical method that measures how close a given algorithm is to optimal on a specific submodular instance.
arXiv Detail & Related papers (2021-02-23T19:39:32Z) - Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm by employing a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z) - An Asymptotically Optimal Primal-Dual Incremental Algorithm for
Contextual Linear Bandits [129.1029690825929]
We introduce a novel algorithm improving over the state-of-the-art along multiple dimensions.
We establish minimax optimality for any learning horizon in the special case of non-contextual linear bandits.
arXiv Detail & Related papers (2020-10-23T09:12:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.