Robust Adaptive Submodular Maximization
- URL: http://arxiv.org/abs/2107.11333v2
- Date: Mon, 26 Jul 2021 03:11:31 GMT
- Title: Robust Adaptive Submodular Maximization
- Authors: Shaojie Tang
- Abstract summary: We study two variants of adaptive submodular optimization problems, namely, worst-case adaptive submodular maximization and robust submodular maximization.
We develop an adaptive worst-case greedy policy that achieves a $\frac{1}{p+1}$ approximation ratio against the optimal worst-case utility.
We also describe several applications of our theoretical results, including pool-based active learning, stochastic submodular set cover and adaptive viral marketing.
- Score: 26.171841086317574
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Most existing studies on adaptive submodular optimization focus on the
average-case, i.e., their objective is to find a policy that maximizes the
expected utility over a known distribution of realizations. However, a policy
that has a good average-case performance may have very poor performance under
the worst-case realization. In this study, we propose to study two variants of
adaptive submodular optimization problems, namely, worst-case adaptive
submodular maximization and robust submodular maximization. The first problem
aims to find a policy that maximizes the worst-case utility, and the second one
aims to find a policy, if any, that achieves both near-optimal average-case
utility and worst-case utility simultaneously. We introduce a new class of
stochastic functions, called \emph{worst-case submodular function}. For the
worst-case adaptive submodular maximization problem subject to a $p$-system
constraint, we develop an adaptive worst-case greedy policy that achieves a
$\frac{1}{p+1}$ approximation ratio against the optimal worst-case utility if
the utility function is worst-case submodular. For the robust adaptive
submodular maximization problem subject to a cardinality constraint, if the
utility function is both worst-case submodular and adaptive submodular, we
develop a hybrid adaptive policy that achieves an approximation close to
$1-e^{-\frac{1}{2}}$ under both the worst-case and average-case settings
simultaneously. We also describe several applications of our theoretical
results, including pool-based active learning, stochastic submodular set cover
and adaptive viral marketing.
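The adaptive worst-case greedy policy described above repeatedly adds the feasible item with the largest worst-case marginal gain. A minimal sketch of that greedy rule is given below; it assumes a finite set of realizations and collapses the adaptive observation step (in the paper, the policy observes each selected item's state before choosing the next item). The function names `worst_case_greedy`, `f`, and `constraint_ok` are illustrative, not from the paper.

```python
def worst_case_greedy(items, realizations, f, constraint_ok):
    """Sketch of a worst-case greedy selection rule.

    items: ground set of selectable items.
    realizations: hypothetical finite set of possible world states.
    f(selected, realization): utility of a selection under one realization.
    constraint_ok(selected): independence oracle for the p-system constraint.
    """
    selected = []
    while True:
        best_item, best_gain = None, 0.0
        for e in items:
            if e in selected or not constraint_ok(selected + [e]):
                continue
            # worst-case marginal gain of e given the current selection
            gain = min(f(selected + [e], r) - f(selected, r)
                       for r in realizations)
            if gain > best_gain:
                best_item, best_gain = e, gain
        if best_item is None:  # no feasible item with positive worst-case gain
            break
        selected.append(best_item)
    return selected
```

Under the worst-case submodularity condition stated in the abstract, this greedy rule is the one shown to achieve the $\frac{1}{p+1}$ ratio against the optimal worst-case utility; the sketch here only illustrates the selection criterion, not the full adaptive policy.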
Related papers
- Provably Mitigating Overoptimization in RLHF: Your SFT Loss is Implicitly an Adversarial Regularizer [52.09480867526656]
We identify the source of misalignment as a form of distributional shift and uncertainty in learning human preferences.
To mitigate overoptimization, we first propose a theoretical algorithm that chooses the best policy for an adversarially chosen reward model.
Using the equivalence between reward models and the corresponding optimal policy, the algorithm features a simple objective that combines a preference optimization loss and a supervised learning loss.
arXiv Detail & Related papers (2024-05-26T05:38:50Z) - Towards Efficient Exact Optimization of Language Model Alignment [93.39181634597877]
Direct preference optimization (DPO) was proposed to directly optimize the policy from preference data.
We show that DPO derived based on the optimal solution of problem leads to a compromised mean-seeking approximation of the optimal solution in practice.
We propose efficient exact optimization (EXO) of the alignment objective.
arXiv Detail & Related papers (2024-02-01T18:51:54Z) - Non-monotone Sequential Submodular Maximization [13.619980548779687]
We aim to select and rank a group of $k$ items from a ground set $V$ such that the weighted assortment of $k$ is maximized.
The results of this research have implications in various fields, including recommendation systems and optimization.
arXiv Detail & Related papers (2023-08-16T19:32:29Z) - Balancing Utility and Fairness in Submodular Maximization (Technical
Report) [27.20340546154524]
We propose a new problem called Bicriteria Submodular Maximization (BSM) to balance utility and fairness.
Since BSM is inapproximable within any constant factor, we focus on designing efficient instance-dependent approximation schemes.
arXiv Detail & Related papers (2022-11-02T09:38:08Z) - Group Equality in Adaptive Submodular Maximization [13.619980548779687]
We study the classic submodular maximization problem subject to a group equality constraint under both non-adaptive and adaptive settings.
We develop the first constant approximation algorithm for this problem.
arXiv Detail & Related papers (2022-07-07T15:12:02Z) - Partial-Adaptive Submodular Maximization [28.24164217929491]
Most studies on the adaptive submodular maximization problem focus on the fully adaptive setting.
In this paper, we explore partial-adaptive submodular maximization, where one is allowed to make multiple selections in a batch simultaneously.
Our approach enjoys the benefits of adaptivity while reducing the time spent on waiting for the observations from past selections.
arXiv Detail & Related papers (2021-11-01T14:48:41Z) - Bayesian Optimization for Min Max Optimization [77.60508571062958]
We propose algorithms that perform Min Max Optimization in a setting where the function that should be optimized is not known a priori.
We extend the two acquisition functions Expected Improvement and Gaussian Process Upper Confidence Bound.
We show that these acquisition functions allow for better solutions - converging faster to the optimum than the benchmark settings.
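The blurb above mentions extending the Expected Improvement acquisition function to the min-max setting. The standard (maximization-form) Expected Improvement that such an extension builds on can be computed in closed form from the surrogate's posterior mean and standard deviation; the sketch below uses only the Python standard library, and the `xi` exploration parameter is an illustrative convention, not taken from the paper.

```python
import math

def expected_improvement(mu, sigma, f_best, xi=0.01):
    """Standard Expected Improvement acquisition (maximization form).

    mu, sigma: posterior mean and standard deviation at a candidate point.
    f_best: best objective value observed so far.
    xi: exploration parameter (illustrative default).
    """
    if sigma <= 0.0:
        return 0.0
    z = (mu - f_best - xi) / sigma
    # standard normal pdf and cdf, the latter via math.erf
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return (mu - f_best - xi) * cdf + sigma * pdf
```

The min-max variant studied in the paper modifies how candidates are scored across the adversarial dimension; this snippet only shows the base acquisition being extended.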
arXiv Detail & Related papers (2021-07-29T06:49:34Z) - Beyond Pointwise Submodularity: Non-Monotone Adaptive Submodular
Maximization subject to a Knapsack Constraint [26.171841086317574]
We study the non-monotone adaptive submodular problem subject to a knapsack constraint.
The state of each item is initially unknown; one must select an item in order to reveal its state.
Our main contribution is to develop a sampling-based randomized algorithm that achieves a constant approximation ratio for maximizing an adaptive submodular function.
arXiv Detail & Related papers (2021-04-10T20:11:11Z) - Planning with Submodular Objective Functions [118.0376288522372]
We study planning with submodular objective functions, where instead of maximizing cumulative reward, the goal is to maximize the value induced by a submodular function.
Our framework subsumes standard planning and submodular objective with cardinality constraints as special cases.
arXiv Detail & Related papers (2020-10-22T16:55:12Z) - Continuous Submodular Function Maximization [91.17492610120324]
Continuous submodularity is a class of functions with a wide spectrum of applications.
We identify several applications of continuous submodular optimization, ranging from influence maximization and MAP inference to mean-field inference.
arXiv Detail & Related papers (2020-06-24T04:37:31Z) - Convergence of adaptive algorithms for weakly convex constrained
optimization [59.36386973876765]
We prove a $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first and second order moment parameters, and possibly smooth optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.