Robust Subset Selection by Greedy and Evolutionary Pareto Optimization
- URL: http://arxiv.org/abs/2205.01415v2
- Date: Sun, 8 May 2022 12:52:44 GMT
- Title: Robust Subset Selection by Greedy and Evolutionary Pareto Optimization
- Authors: Chao Bian, Yawen Zhou, Chao Qian
- Abstract summary: Subset selection aims to select a subset from a ground set to maximize some objective function.
We show that a greedy algorithm can obtain an approximation ratio of $1-e^{-\beta\gamma}$, where $\beta$ and $\gamma$ are the correlation and submodularity ratios of the objective functions.
- Score: 23.0838604893412
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Subset selection, which aims to select a subset from a ground set to maximize
some objective function, arises in various applications such as influence
maximization and sensor placement. In real-world scenarios, however, one often
needs to find a subset which is robust against (i.e., is good over) a number of
possible objective functions due to uncertainty, resulting in the problem of
robust subset selection. This paper considers robust subset selection with
monotone objective functions, relaxing the submodular property required by
previous studies. We first show that the greedy algorithm can obtain an
approximation ratio of $1-e^{-\beta\gamma}$, where $\beta$ and $\gamma$ are the
correlation and submodularity ratios of the objective functions, respectively;
and then propose EPORSS, an evolutionary Pareto optimization algorithm that can
utilize more time to find better subsets. We prove that EPORSS can also be
theoretically grounded, achieving a similar approximation guarantee to the
greedy algorithm. In addition, we derive the lower bound of $\beta$ for the
application of robust influence maximization, and further conduct experiments
to validate the performance of the greedy algorithm and EPORSS.
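The greedy strategy analyzed in the abstract can be sketched as follows: repeatedly add the element with the largest gain in the worst-case objective $\min_i f_i$. This is a minimal illustration, not the authors' implementation; the two coverage-style objectives and the tiny ground set are hypothetical.

```python
# Minimal sketch of greedy robust subset selection: maximize min_i f_i(S)
# over subsets S of size at most k, each f_i monotone. The coverage-style
# objectives below are illustrative, not from the paper.

def robust_greedy(objectives, ground_set, k):
    """Repeatedly add the element with the largest gain in min_i f_i."""
    selected = set()
    robust = lambda s: min(f(s) for f in objectives)
    for _ in range(k):
        # evaluate every remaining element; sorting makes ties deterministic
        best = max(sorted(ground_set - selected),
                   key=lambda v: robust(selected | {v}))
        selected.add(best)
    return selected

# Toy instance: each element covers different items under two scenarios,
# modeling uncertainty over the objective function.
cover1 = {1: {"a"}, 2: {"b"}, 3: {"a", "b"}, 4: {"c"}}
cover2 = {1: {"c"}, 2: {"b"}, 3: {"a"}, 4: {"a", "b"}}
f1 = lambda s: len(set().union(*(cover1[v] for v in s)))
f2 = lambda s: len(set().union(*(cover2[v] for v in s)))
S = robust_greedy([f1, f2], {1, 2, 3, 4}, k=2)
print(S, min(f1(S), f2(S)))
```

Note that each step re-evaluates the full min over objectives, so the cost grows linearly with the number of scenarios.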
Related papers
- Fair Submodular Cover [18.37610521373708]
We present the study of Fair Submodular Cover (FSC), where we are given a ground set $U$, a monotone submodular function $f:2^U\to\mathbb{R}_{\ge 0}$, and a threshold $\tau$.
We first introduce discrete algorithms for FSC that achieve a bicriteria approximation ratio of $(\frac{1}{\epsilon}, 1-O(\epsilon))$.
We then present a continuous algorithm that achieves a $(\frac{1}{\epsilon}, 1-O(\epsilon))$-bicriteria approximation ratio.
arXiv Detail & Related papers (2024-07-05T18:37:09Z)
- Column and row subset selection using nuclear scores: algorithms and theory for Nyström approximation, CUR decomposition, and graph Laplacian reduction [0.0]
We develop unified methodologies for fast, efficient, and theoretically guaranteed column selection.
First, we derive and implement a sparsity-exploiting deterministic algorithm applicable to tasks including kernel approximation and CUR decomposition.
Next, we develop a matrix-free formalism relying on a randomization scheme satisfying guaranteed concentration bounds.
arXiv Detail & Related papers (2024-07-01T18:10:19Z)
- Training Greedy Policy for Proposal Batch Selection in Expensive Multi-Objective Combinatorial Optimization [52.80408805368928]
We introduce a novel greedy-style subset selection algorithm for batch acquisition.
Our experiments on red fluorescent proteins show that the proposed method matches the baseline performance with 1.69x fewer queries.
arXiv Detail & Related papers (2024-06-21T05:57:08Z)
- Combinatorial Stochastic-Greedy Bandit [79.1700188160944]
We propose a novel stochastic-greedy bandit (SGB) algorithm for combinatorial multi-armed bandit problems when no extra information other than the joint reward of the selected set of $n$ arms at each time $t \in [T]$ is observed.
SGB adopts an optimized-explore-then-commit approach and is specifically designed for scenarios with a large set of base arms.
arXiv Detail & Related papers (2023-12-13T11:08:25Z)
- Composable Coresets for Determinant Maximization: Greedy is Almost Optimal [2.849878266188877]
Given a set of $n$ vectors in $\mathbb{R}^d$, the goal of the determinant maximization problem is to pick $k$ vectors with the maximum volume.
We show that the widely-used Greedy algorithm also provides composable coresets with an almost optimal approximation factor of $O(k)^{3k}$.
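The Greedy algorithm referenced in the entry above can be sketched as follows (a simplified Gram-Schmidt-style illustration, not the paper's coreset construction): each step picks the vector with the largest component orthogonal to the span of the vectors already chosen. The input vectors are made-up toy data.

```python
# Sketch of the standard greedy for determinant (volume) maximization:
# repeatedly pick the vector with the largest residual norm after projecting
# out the span of the already-chosen vectors.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def greedy_volume(vectors, k):
    residual = [list(v) for v in vectors]  # components orthogonal to chosen span
    chosen = []
    for _ in range(k):
        # pick the index with the largest residual norm (assumes k <= rank)
        i = max((j for j in range(len(vectors)) if j not in chosen),
                key=lambda j: dot(residual[j], residual[j]))
        chosen.append(i)
        u, n2 = list(residual[i]), dot(residual[i], residual[i])
        for j in range(len(residual)):  # project direction u out of every residual
            c = dot(residual[j], u) / n2
            residual[j] = [a - c * b for a, b in zip(residual[j], u)]
    return chosen

# Toy data: greedy first takes the longest vector, then the one with the
# largest orthogonal component.
picked = greedy_volume([[1, 0, 0], [0, 2, 0], [1, 1, 0], [0, 0, 3]], k=2)
print(picked)
```

The product of the residual norms over the k steps equals the volume of the selected parallelepiped, which is what the greedy implicitly maximizes step by step.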
arXiv Detail & Related papers (2023-09-26T21:46:44Z)
- Result Diversification by Multi-objective Evolutionary Algorithms with Theoretical Guarantees [94.72461292387146]
We propose to reformulate the result diversification problem as a bi-objective search problem, and solve it by a multi-objective evolutionary algorithm (EA)
We theoretically prove that the GSEMO can achieve the optimal polynomial-time approximation ratio of $1/2$.
When the objective function changes dynamically, the GSEMO can maintain this approximation ratio in polynomial running time, addressing the open question proposed by Borodin et al.
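The GSEMO scheme used in such bi-objective analyses can be sketched roughly as follows: solutions are bit strings, each bit flips with probability $1/n$, and the population keeps only mutually non-dominated solutions. This is an illustrative toy, and the two objectives and iteration budget are placeholders, not the paper's reformulation.

```python
import random

# Toy sketch of GSEMO (Global Simple Evolutionary Multi-objective Optimizer):
# bit-string solutions, per-bit mutation with probability 1/n, and a
# population of mutually non-dominated solutions.

def gsemo(f1, f2, n, iters, seed=0):
    rng = random.Random(seed)
    def dominates(a, b):  # a weakly dominates b on both objectives
        return a[0] >= b[0] and a[1] >= b[1]
    x0 = (0,) * n  # start from the all-zeros (empty-set) solution
    pop = {x0: (f1(x0), f2(x0))}
    for _ in range(iters):
        x = rng.choice(list(pop))
        y = tuple(b ^ (rng.random() < 1.0 / n) for b in x)  # flip each bit w.p. 1/n
        fy = (f1(y), f2(y))
        # keep y only if no current solution strictly dominates it
        if not any(dominates(fz, fy) and fz != fy for fz in pop.values()):
            pop = {z: fz for z, fz in pop.items() if not dominates(fy, fz)}
            pop[y] = fy
    return pop

# Placeholder objectives: number of ones vs. number of zeros, whose Pareto
# front contains every trade-off level.
front = gsemo(lambda x: sum(x), lambda x: len(x) - sum(x), n=5, iters=500)
print(len(front))
```

The population is capped implicitly: since dominated (and duplicate-valued) solutions are discarded, its size never exceeds the number of distinct Pareto-optimal objective vectors.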
arXiv Detail & Related papers (2021-10-18T14:00:22Z)
- Momentum Accelerates the Convergence of Stochastic AUPRC Maximization [80.8226518642952]
We study optimization of areas under precision-recall curves (AUPRC), which is widely used for imbalanced tasks.
We develop novel momentum methods with a better iteration complexity of $O(1/\epsilon^4)$ for finding an $\epsilon$-stationary solution.
We also design a novel family of adaptive methods with the same complexity of $O(1/\epsilon^4)$, which enjoy faster convergence in practice.
arXiv Detail & Related papers (2021-07-02T16:21:52Z) - High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds with dependence on confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
arXiv Detail & Related papers (2021-06-10T17:54:21Z) - Online Model Selection for Reinforcement Learning with Function
Approximation [50.008542459050155]
We present a meta-algorithm that adapts to the optimal complexity with $\tilde{O}(L^{5/6} T^{2/3})$ regret.
We also show that the meta-algorithm automatically admits significantly improved instance-dependent regret bounds.
arXiv Detail & Related papers (2020-11-19T10:00:54Z) - Bayesian Optimization of Risk Measures [7.799648230758491]
We consider Bayesian optimization of objective functions of the form $rho[ F(x, W) ]$, where $F$ is a black-box expensive-to-evaluate function.
We propose a family of novel Bayesian optimization algorithms that exploit the structure of the objective function to substantially improve sampling efficiency.
arXiv Detail & Related papers (2020-07-10T18:20:46Z) - Differentiable Greedy Submodular Maximization: Guarantees, Gradient
Estimators, and Applications [15.191184049312467]
We establish a theoretically guaranteed versatile framework that makes the greedy algorithm for monotone submodular functions differentiable.
We smooth the greedy algorithm via randomization, and prove that it almost recovers the original approximation guarantees in expectation for the cases of cardinality and $\kappa$-extensible system constraints.
arXiv Detail & Related papers (2020-05-06T03:33:46Z)
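The randomized smoothing in the last entry can be sketched roughly as follows (an illustrative softmax relaxation, not the paper's exact estimator; the coverage objective and temperature parameter `tau` are hypothetical): instead of the deterministic argmax, each step samples an element with probability proportional to `exp(marginal_gain / tau)`, which makes the selection distribution smooth in the objective values.

```python
import math
import random

# Rough sketch of smoothing the greedy algorithm via randomization: each step
# samples an element with probability proportional to exp(marginal_gain / tau)
# instead of taking the deterministic argmax.

def smoothed_greedy(f, ground_set, k, tau=0.1, seed=0):
    rng = random.Random(seed)
    selected = set()
    for _ in range(k):
        candidates = sorted(ground_set - selected)
        gains = [f(selected | {v}) - f(selected) for v in candidates]
        weights = [math.exp(g / tau) for g in gains]  # softmax over marginal gains
        v = rng.choices(candidates, weights=weights, k=1)[0]
        selected.add(v)
    return selected

# Toy coverage function over a hypothetical ground set; as tau -> 0 the
# sampling concentrates on the argmax and recovers the plain greedy.
cover = {1: {"a", "b"}, 2: {"b"}, 3: {"c"}}
f = lambda s: len(set().union(*(cover[v] for v in s)))
S = smoothed_greedy(f, {1, 2, 3}, k=2)
print(S, f(S))
```

Because each step's output is a sample from a softmax distribution, gradients of the expected objective with respect to upstream parameters can be estimated, which is what makes the relaxed greedy usable inside differentiable pipelines.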
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.