Optimising Monotone Chance-Constrained Submodular Functions Using
Evolutionary Multi-Objective Algorithms
- URL: http://arxiv.org/abs/2006.11444v1
- Date: Sat, 20 Jun 2020 00:17:44 GMT
- Title: Optimising Monotone Chance-Constrained Submodular Functions Using
Evolutionary Multi-Objective Algorithms
- Authors: Aneta Neumann and Frank Neumann
- Abstract summary: We present a first runtime analysis of evolutionary multi-objective algorithms for chance-constrained submodular functions.
We show that the GSEMO algorithm obtains the same worst case performance guarantees as recently analyzed greedy algorithms.
- Score: 15.389164392812276
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many real-world optimisation problems can be stated in terms of submodular
functions. Many evolutionary multi-objective algorithms have recently been
analyzed and applied to submodular problems with different types of
constraints. We present a first runtime analysis of evolutionary
multi-objective algorithms for chance-constrained submodular functions. Here,
the constraint involves stochastic components and may only be violated with a
small probability $\alpha$. We show that the GSEMO algorithm
obtains the same worst case performance guarantees as recently analyzed greedy
algorithms. Furthermore, we investigate the behavior of evolutionary
multi-objective algorithms such as GSEMO and NSGA-II on different submodular
chance constrained network problems. Our experimental results show that this
leads to significant performance improvements compared to the greedy algorithm.
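The GSEMO algorithm analyzed in the abstract can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's exact formulation: the two objectives `f` and `g` are user-supplied (for a chance-constrained submodular problem one would typically use an expected value and a constraint-surplus objective), and the archive keeps only mutually non-dominated bit strings.

```python
import random

def gsemo(f, g, n, iterations=10000, seed=0):
    """Minimal GSEMO sketch for bi-objective maximisation of (f, g) over
    bit strings of length n. f and g are assumed user-supplied objective
    functions; both are maximised. Illustrative only."""
    rng = random.Random(seed)

    def dominates(a, b):
        # a weakly dominates b: at least as good in both objectives
        return a[0] >= b[0] and a[1] >= b[1]

    x = tuple(rng.randint(0, 1) for _ in range(n))
    pop = {x: (f(x), g(x))}  # non-dominated archive: solution -> objective vector

    for _ in range(iterations):
        parent = rng.choice(list(pop))
        # standard bit mutation: flip each bit independently with prob 1/n
        child = tuple(b ^ (rng.random() < 1.0 / n) for b in parent)
        cf = (f(child), g(child))
        if any(dominates(v, cf) for v in pop.values()):
            continue  # child is weakly dominated by the archive: discard
        # keep the child and drop every solution it weakly dominates
        pop = {s: v for s, v in pop.items() if not dominates(cf, v)}
        pop[child] = cf
    return pop
```

On a toy instance with two conflicting objectives (e.g. number of ones vs. number of zeros), the archive converges towards one representative per Pareto-optimal objective vector.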
Related papers
- Benchmarking Algorithms for Submodular Optimization Problems Using
IOHProfiler [22.08617448389877]
This paper introduces a setup for benchmarking algorithms for submodular optimization problems.
The focus is on the development of iterative search algorithms with the implementation provided and integrated into IOHprofiler.
We present a range of submodular optimization problems that have been integrated into IOHprofiler and show how the setup can be used for analyzing and comparing iterative search algorithms in various settings.
arXiv Detail & Related papers (2023-02-02T23:36:23Z)
- Evolution is Still Good: Theoretical Analysis of Evolutionary Algorithms
on General Cover Problems [16.98107289469868]
Some approximation mechanisms seem to be inherently embedded in many evolutionary algorithms.
We identify such a relation by proposing a unified analysis framework for a generalized simple multi-objective evolutionary algorithm.
arXiv Detail & Related papers (2022-10-03T01:25:53Z)
- Multi-Objective Policy Gradients with Topological Constraints [108.10241442630289]
We present a new policy gradient algorithm for TMDPs, obtained by a simple extension of the proximal policy optimization (PPO) algorithm.
We demonstrate this on a real-world multiple-objective navigation problem with an arbitrary ordering of objectives both in simulation and on a real robot.
arXiv Detail & Related papers (2022-09-15T07:22:58Z)
- An Application of a Multivariate Estimation of Distribution Algorithm to
Cancer Chemotherapy [59.40521061783166]
Chemotherapy treatment for cancer is a complex optimisation problem with a large number of interacting variables and constraints.
We show that, contrary to expectation, the more sophisticated algorithm does not yield better performance on a complex problem like this.
We hypothesise that this is caused by the more sophisticated algorithm being impeded by the large number of interactions in the problem.
arXiv Detail & Related papers (2022-05-17T15:28:46Z)
- Result Diversification by Multi-objective Evolutionary Algorithms with
Theoretical Guarantees [94.72461292387146]
We propose to reformulate the result diversification problem as a bi-objective search problem, and solve it by a multi-objective evolutionary algorithm (EA).
We theoretically prove that the GSEMO can achieve the optimal polynomial-time approximation ratio, $1/2$.
When the objective function changes dynamically, the GSEMO can maintain this approximation ratio in polynomial running time, addressing the open question proposed by Borodin et al.
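The bi-objective reformulation described above can be sketched in a few lines. This is an illustrative assumption, not the paper's exact objective: a subset's scalar diversification value (quality plus weighted pairwise distances) is kept as one objective, and subset size, negated so both are maximised, as the other, so a multi-objective EA can trade them off.

```python
from itertools import combinations

def diversification_objectives(S, quality, dist, lam=1.0):
    """Bi-objective reformulation sketch for result diversification.
    `quality` (per-item scores) and `dist` (pairwise distance matrix)
    are assumed user-supplied; lam weights diversity against quality.
    Returns (value + diversity, -|S|), both to be maximised."""
    g = sum(quality[i] for i in S) + lam * sum(
        dist[i][j] for i, j in combinations(sorted(S), 2))
    return g, -len(S)
```

A multi-objective EA such as GSEMO can then maintain an archive of subsets that are non-dominated under these two objectives.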
arXiv Detail & Related papers (2021-10-18T14:00:22Z)
- Harnessing Heterogeneity: Learning from Decomposed Feedback in Bayesian
Modeling [68.69431580852535]
We introduce a novel Gaussian process (GP) regression method to incorporate the subgroup feedback.
Our modified regression has provably lower variance -- and thus a more accurate posterior -- compared to previous approaches.
We execute our algorithm on two disparate social problems.
arXiv Detail & Related papers (2021-07-07T03:57:22Z)
- Fractal Structure and Generalization Properties of Stochastic
Optimization Algorithms [71.62575565990502]
We prove that the generalization error of an optimization algorithm can be bounded in terms of the 'complexity' of the fractal structure that underlies its invariant measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z)
- Maximizing Submodular or Monotone Functions under Partition Matroid
Constraints by Multi-objective Evolutionary Algorithms [16.691265882753346]
A simple evolutionary algorithm called GSEMO has been shown to achieve good approximation for submodular functions efficiently.
We extend theoretical results to partition matroid constraints, which generalize cardinality constraints.
arXiv Detail & Related papers (2020-06-23T05:37:29Z)
- Single- and Multi-Objective Evolutionary Algorithms for the Knapsack
Problem with Dynamically Changing Constraints [13.896724650508087]
We investigate single- and multi-objective baseline evolutionary algorithms for the classical knapsack problem.
Our results show that the multi-objective approaches using a population that caters for dynamic changes have a clear advantage on many benchmark scenarios.
arXiv Detail & Related papers (2020-04-27T03:50:24Z)
- Self-Adjusting Evolutionary Algorithms for Multimodal Optimization [0.0]
Self-adjusting and self-adaptive mechanisms can provably outperform static settings in evolutionary algorithms for binary search spaces.
We suggest a mechanism called stagnation detection that can be added as a module to existing evolutionary algorithms.
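The stagnation-detection idea can be sketched as a module on top of a (1+1) EA. This is a simplified illustration, not the paper's exact algorithm: after many unsuccessful steps at mutation strength r, the algorithm concludes that no improvement at Hamming distance r is likely and raises r; the threshold follows the spirit of the mechanism, but the constants and the cap on r are illustrative assumptions.

```python
import math
import random

def sd_one_plus_one_ea(f, n, max_evals=20000, R=None, seed=0):
    """Simplified sketch of a (1+1) EA with stagnation detection on bit
    strings of length n, maximising f. R controls how conservative the
    stagnation test is (assumed parameter; defaults to n here)."""
    rng = random.Random(seed)
    R = R or n
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = f(x)
    r, fails = 1, 0  # current mutation strength and unsuccessful-step counter
    for _ in range(max_evals):
        # strength-r standard bit mutation: flip each bit with prob r/n
        y = [b ^ (rng.random() < r / n) for b in x]
        fy = f(y)
        if fy > fx:
            x, fx = y, fy
            r, fails = 1, 0  # improvement found: reset strength and counter
        else:
            fails += 1
            # stagnation test: after roughly (n/r)^r * ln(R) failures,
            # improvements at distance r are unlikely, so increase r
            if fails > (n / r) ** r * math.log(R):
                r = min(r + 1, n // 2)
                fails = 0
    return x, fx
```

On a unimodal function like OneMax the mechanism stays at strength 1; its benefit appears on multimodal landscapes, where larger jumps are needed to escape local optima.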
arXiv Detail & Related papers (2020-04-07T11:02:10Z)
- GACEM: Generalized Autoregressive Cross Entropy Method for Multi-Modal
Black Box Constraint Satisfaction [69.94831587339539]
We present a modified Cross-Entropy Method (CEM) that uses a masked auto-regressive neural network for modeling uniform distributions over the solution space.
Our algorithm is able to express complicated solution spaces, thus allowing it to track a variety of different solution regions.
arXiv Detail & Related papers (2020-02-17T20:21:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.