Adaptive Cascade Submodular Maximization
- URL: http://arxiv.org/abs/2007.03592v2
- Date: Fri, 12 Feb 2021 21:57:43 GMT
- Title: Adaptive Cascade Submodular Maximization
- Authors: Shaojie Tang and Jing Yuan
- Abstract summary: We study the cascade submodular maximization problem under the adaptive setting.
Our objective is to identify the best sequence of selecting items so as to maximize the expected utility of the selected items.
- Score: 19.29174615532181
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose and study the cascade submodular maximization
problem under the adaptive setting. The input of our problem is a set of items,
each of which is in a particular state (i.e., the marginal contribution of the
item) drawn from a known probability distribution; however, we cannot observe
an item's actual state before selecting it. As compared with existing studies on
stochastic submodular maximization, one unique setting of our problem is that
each item is associated with a continuation probability which represents the
probability that one is allowed to continue to select the next item after
selecting the current one. Intuitively, this term captures the externality of
selecting one item to all its subsequent items in terms of the opportunity of
being selected. Therefore, the actual set of items that can be selected by a
policy depends on the specific ordering in which it selects items; this makes
our problem fundamentally different from classical submodular set optimization
problems. Our objective is to identify the best sequence of selecting items so
as to maximize the expected utility of the selected items. We propose a class
of stochastic utility functions, \emph{adaptive cascade submodular functions},
and show that the objective functions in many practical application domains
satisfy adaptive cascade submodularity. Then we develop a $0.12$-approximation
algorithm for the adaptive cascade submodular maximization problem.
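The ordering effect described in this abstract can be illustrated with a small, hedged sketch (the items, utilities, and continuation probabilities below are invented for illustration; this is not the paper's $0.12$-approximation algorithm):

```python
import itertools

def expected_utility(order, utility, cont_prob):
    """Expected total utility of selecting items in `order`, where the
    process stops after item i with probability 1 - cont_prob[i]."""
    reach = 1.0   # probability the process is still "alive" at this position
    total = 0.0
    for i in order:
        total += reach * utility[i]   # item i contributes only if reached
        reach *= cont_prob[i]         # survive to the next position
    return total

# Toy instance with deterministic states; the actual problem draws each
# item's state from a known distribution.
utility = {0: 5.0, 1: 3.0, 2: 4.0}
cont_prob = {0: 0.5, 1: 0.9, 2: 0.7}

best = max(itertools.permutations(utility),
           key=lambda o: expected_utility(o, utility, cont_prob))
print(best, round(expected_utility(best, utility, cont_prob), 2))  # → (1, 2, 0) 9.75
```

Note how the brute-force optimum defers the highest-utility item (item 0) to the last position, since its low continuation probability would otherwise cut the sequence short; this is the ordering externality the abstract describes.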
Related papers
- An incremental preference elicitation-based approach to learning potentially non-monotonic preferences in multi-criteria sorting [53.36437745983783]
We first construct a max-margin optimization-based model to represent potentially non-monotonic preferences.
We devise information amount measurement methods and question selection strategies to pinpoint the most informative alternative in each iteration.
Two incremental preference elicitation-based algorithms are developed to learn potentially non-monotonic preferences.
arXiv Detail & Related papers (2024-09-04T14:36:20Z) - Feature Selection as Deep Sequential Generative Learning [50.00973409680637]
We develop a deep variational transformer model over a joint of sequential reconstruction, variational, and performance evaluator losses.
Our model can distill feature selection knowledge and learn a continuous embedding space to map feature selection decision sequences into embedding vectors associated with utility scores.
arXiv Detail & Related papers (2024-03-06T16:31:56Z) - Non-monotone Sequential Submodular Maximization [13.619980548779687]
We aim to select and rank a group of $k$ items from a ground set $V$ such that the weighted assortment value of the selected sequence is maximized.
The results of this research have implications in various fields, including recommendation systems and optimization.
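As a hedged toy sketch of the select-and-rank objective above (the coverage function, prefix weights, and item names are assumptions for illustration, not the paper's exact model):

```python
import itertools

items = {            # each item covers a set of ground elements
    "a": {1, 2},
    "b": {2, 3},
    "c": {4},
}
weights = [3, 2, 1]  # earlier positions count more

def sequence_value(order):
    """Position-weighted sum of prefix coverage values."""
    covered, value = set(), 0
    for pos, item in enumerate(order):
        covered |= items[item]
        value += weights[pos] * len(covered)
    return value

best = max(itertools.permutations(items, 3), key=sequence_value)
print(best, sequence_value(best))  # → ('a', 'b', 'c') 16
```

Because early positions carry more weight, the optimum fronts items with high marginal coverage; reversing the sequence would lower the objective even though the selected set is identical.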
arXiv Detail & Related papers (2023-08-16T19:32:29Z) - Streaming Adaptive Submodular Maximization [19.29174615532181]
We introduce a new class of utility functions, semi-policywise submodular functions.
We develop a series of effective algorithms to maximize a semi-policywise submodular function under the stream-based setting.
arXiv Detail & Related papers (2022-08-17T02:05:10Z) - Group Equality in Adaptive Submodular Maximization [13.619980548779687]
We study the classic submodular maximization problem subject to a group equality constraint under both non-adaptive and adaptive settings.
We develop the first constant approximation algorithm for this problem.
arXiv Detail & Related papers (2022-07-07T15:12:02Z) - Fast Feature Selection with Fairness Constraints [49.142308856826396]
We study the fundamental problem of selecting optimal features for model construction.
This problem is computationally challenging on large datasets, even with the use of greedy algorithm variants.
We extend the adaptive query model, recently proposed for the greedy forward selection for submodular functions, to the faster paradigm of Orthogonal Matching Pursuit for non-submodular functions.
The proposed algorithm achieves exponentially fast parallel run time in the adaptive query model, scaling much better than prior work.
arXiv Detail & Related papers (2022-02-28T12:26:47Z) - Partial-Adaptive Submodular Maximization [28.24164217929491]
Most studies on the adaptive submodular maximization problem focus on the fully adaptive setting.
In this paper, we explore the partial-adaptive submodular maximization problem, where one is allowed to make multiple selections simultaneously in a batch.
Our approach enjoys the benefits of adaptivity while reducing the time spent on waiting for the observations from past selections.
arXiv Detail & Related papers (2021-11-01T14:48:41Z) - Submodular + Concave [53.208470310734825]
It has been well established that first-order optimization methods can converge to the maximal objective value of concave functions.
In this work, we initiate the study of maximizing functions of the form $F(x) = G(x) + C(x)$ over a solvable convex body, where $G$ is a smooth continuous DR-submodular function and $C$ is a smooth concave function.
This class of functions is an extension of both concave and continuous DR-submodular functions, for which no prior guarantee is known.
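A hedged sketch of this setting (the specific choices of $G$, $C$, step size, and iteration count are assumptions for illustration, not the paper's method): projected gradient ascent on a tiny instance of $F(x) = G(x) + C(x)$ over the box $[0,1]^2$.

```python
def grad_F(x1, x2):
    # G(x) = 1 - (1 - x1)(1 - x2)   (monotone continuous DR-submodular)
    # C(x) = -(x1**2 + x2**2) / 2   (smooth concave regularizer)
    return (1 - x2) - x1, (1 - x1) - x2

def F(x1, x2):
    return 1 - (1 - x1) * (1 - x2) - (x1**2 + x2**2) / 2

x1 = x2 = 0.0
for _ in range(200):                      # fixed step size, projected to [0, 1]
    g1, g2 = grad_F(x1, x2)
    x1 = min(1.0, max(0.0, x1 + 0.1 * g1))
    x2 = min(1.0, max(0.0, x2 + 0.1 * g2))
print(round(x1, 3), round(x2, 3), round(F(x1, x2), 3))  # → 0.5 0.5 0.5
```

On this toy instance the iterates converge to a stationary point with $F = 0.5$, which is also the maximum over the box; the paper's contribution is guarantees for this sum structure in general, where neither concavity nor DR-submodularity alone holds.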
arXiv Detail & Related papers (2021-06-09T01:59:55Z) - Beyond Pointwise Submodularity: Non-Monotone Adaptive Submodular
Maximization subject to a Knapsack Constraint [26.171841086317574]
We study the non-monotone adaptive submodular problem subject to a knapsack constraint.
The state of an item is initially unknown; one must select an item in order to reveal its state.
Our main contribution is to develop a sampling-based randomized algorithm that achieves a constant approximation ratio for maximizing an adaptive submodular function.
arXiv Detail & Related papers (2021-04-10T20:11:11Z) - From Sets to Multisets: Provable Variational Inference for Probabilistic
Integer Submodular Models [82.95892656532696]
Submodular functions have been studied extensively in machine learning and data mining.
In this work, we propose a continuous DR-submodular extension for integer submodular functions.
We formulate a new probabilistic model which is defined through integer submodular functions.
arXiv Detail & Related papers (2020-06-01T22:20:45Z) - Stepwise Model Selection for Sequence Prediction via Deep Kernel
Learning [100.83444258562263]
We propose a novel Bayesian optimization (BO) algorithm to tackle the challenge of model selection for sequence prediction.
In order to solve the resulting multiple black-box function optimization problem jointly and efficiently, we exploit potential correlations among black-box functions.
We are the first to formulate the problem of stepwise model selection (SMS) for sequence prediction, and to design and demonstrate an efficient joint-learning algorithm for this purpose.
arXiv Detail & Related papers (2020-01-12T09:42:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.