Optimising Monotone Chance-Constrained Submodular Functions Using
Evolutionary Multi-Objective Algorithms
- URL: http://arxiv.org/abs/2006.11444v1
- Date: Sat, 20 Jun 2020 00:17:44 GMT
- Title: Optimising Monotone Chance-Constrained Submodular Functions Using
Evolutionary Multi-Objective Algorithms
- Authors: Aneta Neumann and Frank Neumann
- Abstract summary: We present a first runtime analysis of evolutionary multi-objective algorithms for chance-constrained submodular functions.
We show that the GSEMO algorithm obtains the same worst case performance guarantees as recently analyzed greedy algorithms.
- Score: 15.389164392812276
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many real-world optimisation problems can be stated in terms of submodular
functions. A variety of evolutionary multi-objective algorithms have recently been
analyzed and applied to submodular problems with different types of
constraints. We present a first runtime analysis of evolutionary
multi-objective algorithms for chance-constrained submodular functions. Here,
the constraint involves stochastic components and can only be violated with a
small probability of alpha. We show that the GSEMO algorithm
obtains the same worst case performance guarantees as recently analyzed greedy
algorithms. Furthermore, we investigate the behavior of evolutionary
multi-objective algorithms such as GSEMO and NSGA-II on different submodular
chance-constrained network problems. Our experimental results show that the use of
these evolutionary multi-objective algorithms leads to significant performance
improvements compared to the greedy algorithm.
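To make the algorithmic setup concrete, here is a minimal, hedged sketch of the GSEMO pattern on a bi-objective formulation: maximise the submodular value of solutions that pass a chance-constraint check, while minimising the number of selected elements. The names `f` and `is_feasible`, the penalty value, and the loop structure are illustrative placeholders rather than the paper's exact formulation; in the paper the feasibility check is a surrogate that guarantees the stochastic constraint is violated with probability at most alpha.

```python
import random

def gsemo(f, is_feasible, n, iterations, rng=random.Random(0)):
    """Minimal GSEMO sketch (hypothetical interface, not the paper's exact setup).

    f(x)            -- monotone submodular objective on a 0/1 tuple x (f >= 0 assumed)
    is_feasible(x)  -- surrogate check that the chance constraint holds for x
    n               -- size of the ground set
    """
    def objectives(x):
        # Bi-objective formulation: maximise f for feasible solutions and
        # minimise the number of selected elements (encoded as -|x|).
        ones = sum(x)
        value = f(x) if is_feasible(x) else -1.0   # penalise infeasible solutions
        return (value, -ones)

    def dominates(a, b):
        # Weak dominance in every objective, strict in at least one.
        return all(ai >= bi for ai, bi in zip(a, b)) and a != b

    empty = tuple(0 for _ in range(n))              # start from the empty set
    population = {empty: objectives(empty)}         # archive of non-dominated solutions

    for _ in range(iterations):
        parent = rng.choice(list(population))
        # Standard bit-flip mutation: flip each bit independently with probability 1/n.
        child = tuple(b ^ (rng.random() < 1.0 / n) for b in parent)
        obj_c = objectives(child)
        if any(dominates(o, obj_c) for o in population.values()):
            continue                                # child is dominated, discard it
        # Otherwise keep the child and drop solutions it dominates (or duplicates).
        population = {y: o for y, o in population.items()
                      if not (dominates(obj_c, o) or o == obj_c)}
        population[child] = obj_c

    feasible = [y for y in population if is_feasible(y)]
    return max(feasible, key=f) if feasible else empty
```

For instance, with independent weights of known expectation and dispersion, `is_feasible` could compare the expected total weight plus a tail-bound term against the budget, so that passing the check implies the chance constraint holds with probability at least 1 - alpha.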
Related papers
- Non-linear Multi-objective Optimization with Probabilistic Branch and Bound [6.305913808037513]
A multiple objective simulation optimization algorithm named Multiple Objective Probabilistic Branch and Bound with Single Observation (MOPBnB(so)) is presented. Results reveal that the variant with multiple replications is much more computationally intensive than MOPBnB(so). In addition, numerical results show that MOPBnB(so) outperforms the genetic algorithm NSGA-II on test problems.
arXiv Detail & Related papers (2025-06-05T02:01:08Z)
- A Comparison-Relationship-Surrogate Evolutionary Algorithm for Multi-Objective Optimization [0.0]
We propose a new evolutionary algorithm "CRSEA" which uses the comparison-relationship model.
We find that CRSEA finds better converged solutions than the tested SAEAs on many medium-scale, biobjective problems.
arXiv Detail & Related papers (2025-04-28T01:39:38Z)
- Archive-based Single-Objective Evolutionary Algorithms for Submodular Optimization [9.852567834643288]
We introduce for the first time single-objective algorithms that are provably successful for different classes of constrained submodular problems.
Our algorithms are variants of the $(1+\lambda)$-EA and $(1+1)$-EA.
arXiv Detail & Related papers (2024-06-19T10:08:12Z)
- UCB-driven Utility Function Search for Multi-objective Reinforcement Learning [75.11267478778295]
In Multi-objective Reinforcement Learning (MORL), agents are tasked with optimising decision-making behaviours.
We focus on the case of linear utility functions parameterised by weight vectors w.
We introduce a method based on Upper Confidence Bound to efficiently search for the most promising weight vectors during different stages of the learning process.
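The weight-vector search can be pictured as a multi-armed bandit in which each candidate weight vector is an arm and the scalarised return it achieves is the reward. The UCB1-style sketch below is an illustration under that assumption, not the paper's exact procedure; all names and the exploration constant are placeholders.

```python
import math

def ucb_select_weight(stats, t, c=2.0):
    """Pick the next weight vector to train with, UCB1-style (illustrative only).

    stats -- dict mapping a candidate weight vector (tuple) to
             [number of evaluations, mean scalarised return]
    t     -- total number of evaluations performed so far
    """
    def ucb(entry):
        count, mean = entry
        if count == 0:
            return float("inf")          # force unexplored weight vectors to be tried
        return mean + c * math.sqrt(math.log(t) / count)

    return max(stats, key=lambda w: ucb(stats[w]))

def update_stats(stats, w, reward):
    """Incrementally update the running mean return for weight vector w."""
    count, mean = stats[w]
    count += 1
    mean += (reward - mean) / count
    stats[w] = [count, mean]
```

Each selected weight vector would then parameterise the linear utility used to scalarise the multi-objective return during the next stage of learning, with `update_stats` recording the observed outcome.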
arXiv Detail & Related papers (2024-05-01T09:34:42Z)
- Double Duality: Variational Primal-Dual Policy Optimization for Constrained Reinforcement Learning [132.7040981721302]
We study the Constrained Convex Markov Decision Process (MDP), where the goal is to minimize a convex functional of the visitation measure.
Designing algorithms for a constrained convex MDP faces several challenges, including handling the large state space.
arXiv Detail & Related papers (2024-02-16T16:35:18Z)
- Multi-Objective Policy Gradients with Topological Constraints [108.10241442630289]
We present a new policy gradient algorithm for TMDPs, obtained by a simple extension of the proximal policy optimization (PPO) algorithm.
We demonstrate this on a real-world multiple-objective navigation problem with an arbitrary ordering of objectives both in simulation and on a real robot.
arXiv Detail & Related papers (2022-09-15T07:22:58Z)
- A novel multiobjective evolutionary algorithm based on decomposition and multi-reference points strategy [14.102326122777475]
The multiobjective evolutionary algorithm based on decomposition (MOEA/D) has been regarded as a highly promising approach for solving multiobjective optimization problems (MOPs).
We propose an improved MOEA/D algorithm based on the well-known Pascoletti-Serafini scalarization method and a new multi-reference-points strategy.
arXiv Detail & Related papers (2021-10-27T02:07:08Z)
- Result Diversification by Multi-objective Evolutionary Algorithms with Theoretical Guarantees [94.72461292387146]
We propose to reformulate the result diversification problem as a bi-objective search problem, and solve it by a multi-objective evolutionary algorithm (EA).
We theoretically prove that the GSEMO can achieve the optimal polynomial-time approximation ratio, $1/2$.
When the objective function changes dynamically, the GSEMO can maintain this approximation ratio in polynomial running time, addressing the open question proposed by Borodin et al.
arXiv Detail & Related papers (2021-10-18T14:00:22Z)
- Runtime Analysis of Single- and Multi-Objective Evolutionary Algorithms for Chance Constrained Optimization Problems with Normally Distributed Random Variables [11.310502327308575]
We study the scenario of components that are independent and normally distributed.
We introduce a multi-objective formulation of the problem which trades off the expected cost and its variance.
We prove that this approach can also be used to compute a set of optimal solutions for the chance constrained minimum spanning tree problem.
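Because a sum of independent normal random variables is again normal, the chance constraint admits a closed-form check that makes the trade-off between expected cost and variance explicit. The sketch below illustrates this standard reformulation; the variable names and the budget-style constraint are illustrative assumptions rather than the paper's notation.

```python
from statistics import NormalDist

def chance_constraint_holds(expected_costs, variances, selected, budget, alpha):
    """Check Pr[total cost > budget] <= alpha for independent normal components.

    The total cost of the selected elements is normal with mean mu and
    variance v, so the chance constraint is equivalent to
        mu + K_alpha * sqrt(v) <= budget,
    where K_alpha is the (1 - alpha)-quantile of the standard normal.
    """
    mu = sum(expected_costs[i] for i in selected)
    v = sum(variances[i] for i in selected)
    k_alpha = NormalDist().inv_cdf(1.0 - alpha)
    return mu + k_alpha * v ** 0.5 <= budget
```

A multi-objective algorithm that computes the trade-off front over (expected cost, variance) therefore contains, for every choice of alpha and budget, a best solution with respect to this surrogate.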
arXiv Detail & Related papers (2021-09-13T09:24:23Z)
- Harnessing Heterogeneity: Learning from Decomposed Feedback in Bayesian Modeling [68.69431580852535]
We introduce a novel Gaussian process (GP) regression model to incorporate the subgroup feedback.
Our modified regression has provably lower variance -- and thus a more accurate posterior -- compared to previous approaches.
We execute our algorithm on two disparate social problems.
arXiv Detail & Related papers (2021-07-07T03:57:22Z)
- Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms [71.62575565990502]
We prove that the generalization error of a stochastic optimization algorithm can be bounded based on the 'complexity' of the fractal structure that underlies its invariant measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z)
- Single- and Multi-Objective Evolutionary Algorithms for the Knapsack Problem with Dynamically Changing Constraints [13.896724650508087]
We investigate single- and multi-objective baseline evolutionary algorithms for the classical knapsack problem.
Our results show that the multi-objective approaches using a population that caters for dynamic changes have a clear advantage on many benchmark scenarios.
arXiv Detail & Related papers (2020-04-27T03:50:24Z)
- Self-Adjusting Evolutionary Algorithms for Multimodal Optimization [0.0]
Self-adjusting and self-adaptive mechanisms can provably outperform static settings in evolutionary algorithms for binary search spaces.
We suggest a mechanism called stagnation detection that can be added as a module to existing evolutionary algorithms (a sketch follows this entry).
arXiv Detail & Related papers (2020-04-07T11:02:10Z)
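As referenced above, here is a minimal sketch of a stagnation-detection module for a (1+1)-type evolutionary algorithm: it counts consecutive unsuccessful generations and raises the mutation strength once the counter exceeds a threshold for the current strength. The class name, threshold formula, and reset policy are simplified assumptions, not the exact module analysed in the paper.

```python
import math

class StagnationDetection:
    """Illustrative stagnation-detection module for a (1+1)-type EA.

    After too many consecutive generations without an improvement at the
    current mutation strength, the strength is increased so that larger
    jumps out of a local optimum become likely.
    """

    def __init__(self, n, max_strength=None):
        self.n = n                                   # bit-string length
        self.strength = 1                            # expected number of flipped bits
        self.max_strength = max_strength or n // 2
        self.counter = 0                             # unsuccessful generations so far

    def threshold(self):
        # Roughly the expected waiting time to flip a specific set of
        # `strength` bits, times a logarithmic safety factor (simplified).
        return math.ceil((math.e * self.n / self.strength) ** self.strength
                         * math.log(self.n))

    def report(self, improved):
        """Call once per generation with whether the offspring strictly improved."""
        if improved:
            self.counter = 0
            self.strength = 1                        # reset after an improvement
        else:
            self.counter += 1
            if self.counter >= self.threshold():
                self.strength = min(self.strength + 1, self.max_strength)
                self.counter = 0
```

An EA using this module would mutate with rate `strength / n` each generation and call `report` with the outcome of that generation.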
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.