Single- and Multi-Objective Evolutionary Algorithms for the Knapsack
Problem with Dynamically Changing Constraints
- URL: http://arxiv.org/abs/2004.12574v2
- Date: Sun, 5 Jun 2022 05:45:38 GMT
- Title: Single- and Multi-Objective Evolutionary Algorithms for the Knapsack
Problem with Dynamically Changing Constraints
- Authors: Vahid Roostapour, Aneta Neumann, Frank Neumann
- Abstract summary: We investigate single- and multi-objective baseline evolutionary algorithms for the classical knapsack problem.
Our results show that the multi-objective approaches using a population that caters for dynamic changes have a clear advantage on many benchmark scenarios.
- Score: 13.896724650508087
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Evolutionary algorithms are bio-inspired algorithms that can easily adapt to
changing environments. Recent results in the area of runtime analysis have
pointed out that algorithms such as the (1+1) EA and Global SEMO can
efficiently reoptimize linear functions under a dynamic uniform constraint.
Motivated by this study, we investigate single- and multi-objective baseline
evolutionary algorithms for the classical knapsack problem where the capacity
of the knapsack varies over time.
We establish different benchmark scenarios where the capacity changes every
$\tau$ iterations according to a uniform or normal distribution. Our
experimental investigations analyze the behavior of our algorithms in terms of
the magnitude of changes determined by parameters of the chosen distribution,
the frequency determined by $\tau$, and the class of knapsack instance under
consideration. Our results show that the multi-objective approaches using a
population that caters for dynamic changes have a clear advantage on many
benchmark scenarios when the frequency of changes is not too high.
Furthermore, we demonstrate that the diversity mechanisms used in popular
evolutionary multi-objective algorithms such as NSGA-II and SPEA2 do not
necessarily result in better performance and even lead to inferior results
compared to our simple multi-objective approaches.
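The single-objective baseline described in the abstract can be pictured as a minimal (1+1) EA on a knapsack whose capacity is resampled every $\tau$ iterations. The sketch below is an illustrative assumption, not the paper's exact setup: the penalty-based fitness, the parameter names (`tau`, `sigma`), and the instance are all chosen for demonstration.

```python
import random

def one_plus_one_ea_dynamic(profits, weights, capacity0, tau=100,
                            sigma=0.0, iterations=1000, seed=0):
    # Minimal (1+1) EA for a knapsack whose capacity changes every tau
    # iterations, drawn from a normal distribution around capacity0
    # (sigma=0 keeps the capacity fixed). Overweight solutions are
    # penalized so that any feasible solution beats any infeasible one.
    rng = random.Random(seed)
    n = len(profits)

    def fitness(x, cap):
        p = sum(pi * xi for pi, xi in zip(profits, x))
        w = sum(wi * xi for wi, xi in zip(weights, x))
        return p - (sum(profits) + w - cap) * (w > cap)

    cap = capacity0
    x = [0] * n  # the empty knapsack is always feasible
    for t in range(1, iterations + 1):
        if t % tau == 0:
            cap = max(0, int(rng.gauss(capacity0, sigma)))  # dynamic change
        y = [xi ^ (rng.random() < 1.0 / n) for xi in x]  # standard bit mutation
        if fitness(y, cap) >= fitness(x, cap):
            x = y
    return x, cap
```

With `sigma=0` this reduces to the static knapsack; increasing `sigma` and lowering `tau` reproduces the larger-magnitude, higher-frequency change regimes the benchmarks vary over.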
Related papers
- Using 3-Objective Evolutionary Algorithms for the Dynamic Chance Constrained Knapsack Problem [9.617143859697322]
We explore the use of 3-objective evolutionary algorithms for the chance constrained knapsack problem with dynamic constraints.
We introduce a 3-objective formulation that is able to deal with the dynamic components at the same time and is independent of the confidence level required for the constraint.
Our analysis highlights the advantages of the 3-objective formulation over the 2-objective formulation in addressing the dynamic chance constrained knapsack problem.
arXiv Detail & Related papers (2024-04-09T04:47:01Z) - Analysis of Quality Diversity Algorithms for the Knapsack Problem [14.12876643502492]
We apply the QD paradigm to simulate dynamic programming behaviours on the knapsack problem.
We show that they are able to compute an optimal solution within expected pseudo-polynomial time.
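One way to picture the QD-as-dynamic-programming connection is a MAP-Elites-style archive keyed by total weight: keeping, for each feasible weight, the best profit found mirrors the classical DP table. The code below is a sketch under assumed details, not the algorithm analyzed in the paper.

```python
import random

def qd_knapsack(profits, weights, capacity, iterations=5000, seed=0):
    # Quality-diversity sketch: the behaviour descriptor is the total
    # weight, and the archive keeps one best-profit solution per weight.
    # Once the archive is saturated it contains an optimal solution,
    # mirroring the dynamic-programming table for the knapsack problem.
    rng = random.Random(seed)
    n = len(profits)
    archive = {0: (0, (0,) * n)}  # weight -> (profit, solution)
    for _ in range(iterations):
        _, parent = archive[rng.choice(list(archive))]
        child = tuple(xi ^ (rng.random() < 1.0 / n) for xi in parent)
        w = sum(wi * xi for wi, xi in zip(weights, child))
        p = sum(pi * xi for pi, xi in zip(profits, child))
        if w <= capacity and (w not in archive or p > archive[w][0]):
            archive[w] = (p, child)
    return max(p for p, _ in archive.values())
```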
arXiv Detail & Related papers (2022-07-28T12:15:33Z) - Evolving Pareto-Optimal Actor-Critic Algorithms for Generalizability and
Stability [67.8426046908398]
Generalizability and stability are two key objectives for operating reinforcement learning (RL) agents in the real world.
This paper presents MetaPG, an evolutionary method for automated design of actor-critic loss functions.
arXiv Detail & Related papers (2022-04-08T20:46:16Z) - Reproducibility and Baseline Reporting for Dynamic Multi-objective
Benchmark Problems [4.859986264602551]
This paper focuses on simulation experiments for the parameters of DMOPs.
A baseline schema for dynamic algorithm evaluation is introduced.
We can establish the minimum capability required of purpose-built dynamic algorithms to be useful.
arXiv Detail & Related papers (2022-04-08T15:50:17Z) - Result Diversification by Multi-objective Evolutionary Algorithms with
Theoretical Guarantees [94.72461292387146]
We propose to reformulate the result diversification problem as a bi-objective search problem, and solve it by a multi-objective evolutionary algorithm (EA).
We theoretically prove that the GSEMO can achieve the optimal polynomial-time approximation ratio of $1/2$.
When the objective function changes dynamically, the GSEMO can maintain this approximation ratio in polynomial running time, addressing the open question proposed by Borodin et al.
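GSEMO (mentioned both here and in the main abstract as Global SEMO) follows a simple archive-based loop. Below is a minimal generic sketch; the diversification objectives from the paper are not modeled, and the objective function `f` is a placeholder assumption.

```python
import random

def gsemo(n, f, iterations=5000, seed=0):
    # Generic GSEMO sketch for maximizing a multi-objective function f
    # over bit strings: maintain an archive of mutually non-dominated
    # solutions, mutate a uniformly chosen member, and insert the child
    # unless some archived solution already weakly dominates it.
    rng = random.Random(seed)

    def weakly_dominates(a, b):
        return all(ai >= bi for ai, bi in zip(a, b))

    pop = [(0,) * n]
    for _ in range(iterations):
        parent = rng.choice(pop)
        child = tuple(x ^ (rng.random() < 1.0 / n) for x in parent)
        fc = f(child)
        if not any(weakly_dominates(f(q), fc) for q in pop):
            # Drop archive members the child now weakly dominates.
            pop = [q for q in pop if not weakly_dominates(fc, f(q))] + [child]
    return pop
```

The archive never holds two solutions with the same objective vector, so its size is bounded by the number of distinct non-dominated objective vectors.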
arXiv Detail & Related papers (2021-10-18T14:00:22Z) - Fractal Structure and Generalization Properties of Stochastic
Optimization Algorithms [71.62575565990502]
We prove that the generalization error of an optimization algorithm can be bounded by the complexity of the fractal structure that underlies its generalization measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z) - AdaLead: A simple and robust adaptive greedy search algorithm for
sequence design [55.41644538483948]
We develop a simple, scalable, and robust evolutionary greedy algorithm (AdaLead).
AdaLead is a remarkably strong benchmark that out-competes more complex state-of-the-art approaches in a variety of biologically motivated sequence design challenges.
arXiv Detail & Related papers (2020-10-05T16:40:38Z) - EOS: a Parallel, Self-Adaptive, Multi-Population Evolutionary Algorithm
for Constrained Global Optimization [68.8204255655161]
EOS is a global optimization algorithm for constrained and unconstrained problems of real-valued variables.
It implements a number of improvements to the well-known Differential Evolution (DE) algorithm.
Results prove that EOS is capable of achieving increased performance compared to state-of-the-art single-population self-adaptive DE algorithms.
arXiv Detail & Related papers (2020-07-09T10:19:22Z) - Optimizing Monotone Chance-Constrained Submodular Functions Using Evolutionary Multi-Objective Algorithms [11.807734722701786]
Here the constraint involves stochastic components and can only be violated with a small probability $\alpha$.
We show that the algorithm GSEMO obtains the same worst case performance guarantees for monotone submodular functions.
Our experimental results show that the use of evolutionary multi-objective algorithms leads to significant performance improvements.
arXiv Detail & Related papers (2020-06-20T00:17:44Z) - Stochastic batch size for adaptive regularization in deep network
optimization [63.68104397173262]
We propose a first-order optimization algorithm incorporating adaptive regularization applicable to machine learning problems in deep learning framework.
We empirically demonstrate the effectiveness of our algorithm using an image classification task based on conventional network models applied to commonly used benchmark datasets.
arXiv Detail & Related papers (2020-04-14T07:54:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.