Evolutionary Bi-objective Optimization for the Dynamic
Chance-Constrained Knapsack Problem Based on Tail Bound Objectives
- URL: http://arxiv.org/abs/2002.06766v1
- Date: Mon, 17 Feb 2020 04:36:15 GMT
- Title: Evolutionary Bi-objective Optimization for the Dynamic
Chance-Constrained Knapsack Problem Based on Tail Bound Objectives
- Authors: Hirad Assimi, Oscar Harper, Yue Xie, Aneta Neumann, Frank Neumann
- Abstract summary: We consider the dynamic chance-constrained knapsack problem where the weight of each item is stochastic and the capacity constraint changes over time.
The objective is to maximize the total profit subject to the constraint that the probability of the total weight exceeding the capacity stays below a given threshold.
We introduce an additional objective which estimates the minimal capacity bound for a given solution that still meets the chance constraint.
- Score: 12.634782111072585
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Real-world combinatorial optimization problems are often stochastic and
dynamic. Therefore, it is essential to make optimal and reliable decisions with
a holistic approach. In this paper, we consider the dynamic chance-constrained
knapsack problem where the weight of each item is stochastic, the capacity
constraint changes dynamically over time, and the objective is to maximize the
total profit subject to the constraint that the probability of the total weight exceeding the capacity stays below a given threshold.
We make use of prominent tail inequalities such as Chebyshev's inequality and
the Chernoff bound to approximate the probabilistic constraint. Our key
contribution is to introduce an additional objective which estimates the
minimal capacity bound for a given stochastic solution that still meets the
chance constraint. This objective helps to cater for dynamic changes to the
stochastic problem. We apply single- and multi-objective evolutionary
algorithms to the problem and show how bi-objective optimization can help to
deal with dynamic chance-constrained problems.
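
As a concrete illustration of the capacity-bound objective described in the abstract, the sketch below computes, for a candidate solution, the smallest capacity that still satisfies the chance constraint under the one-sided Chebyshev (Cantelli) inequality, assuming independent item weights with known means and variances. Function names and data are illustrative and not taken from the paper.

```python
import numpy as np

def chebyshev_capacity_bound(x, mean_w, var_w, alpha):
    """Smallest capacity C with Pr(weight(x) > C) <= alpha according to the
    one-sided Chebyshev (Cantelli) bound: C >= E[W] + sqrt(Var[W] * (1-alpha)/alpha).
    Assumes independent item weights; illustrative sketch only."""
    exp_weight = np.dot(x, mean_w)           # expected total weight of the solution
    var_weight = np.dot(x, var_w)            # variance of the total weight
    return exp_weight + np.sqrt(var_weight * (1.0 - alpha) / alpha)

def bi_objective(x, profit, mean_w, var_w, alpha):
    """Bi-objective value: (total profit to maximise, minimal feasible capacity to minimise)."""
    return np.dot(x, profit), chebyshev_capacity_bound(x, mean_w, var_w, alpha)

# Toy instance with three items and a 10% overweight tolerance.
x = np.array([1, 0, 1])
print(bi_objective(x,
                   profit=np.array([10.0, 4.0, 7.0]),
                   mean_w=np.array([3.0, 2.0, 5.0]),
                   var_w=np.array([0.5, 0.2, 1.0]),
                   alpha=0.1))
```

Because the second objective does not reference the current capacity, a population that covers good trade-offs with respect to it remains useful when the capacity changes dynamically, which is the motivation for the bi-objective formulation.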
Related papers
- Stochastic Q-learning for Large Discrete Action Spaces [79.1700188160944]
In complex environments with discrete action spaces, effective decision-making is critical in reinforcement learning (RL).
We present value-based RL approaches which, as opposed to optimizing over the entire set of $n$ actions, only consider a variable set of actions, possibly as small as $\mathcal{O}(\log(n))$.
The presented value-based RL methods include, among others, Q-learning, StochDQN, StochDDQN, all of which integrate this approach for both value-function updates and action selection.
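
A minimal sketch of the core idea, scoring only a random subset of roughly log(n) actions instead of all n when taking a max; the function name and sizes are illustrative, not the authors' implementation.

```python
import numpy as np

def stochastic_argmax(q_values, rng, subset_size=None):
    """Approximate argmax over a large discrete action space by evaluating
    only ~log(n) randomly sampled actions (illustrative sketch)."""
    n = len(q_values)
    k = subset_size or max(1, int(np.ceil(np.log(n))))
    candidates = rng.choice(n, size=k, replace=False)
    return candidates[np.argmax(q_values[candidates])]

rng = np.random.default_rng(0)
q = rng.normal(size=10_000)          # Q-values for 10,000 actions
print(stochastic_argmax(q, rng))     # action picked from ~10 sampled candidates
```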
arXiv Detail & Related papers (2024-05-16T17:58:44Z) - Multi-Objective Evolutionary Algorithms with Sliding Window Selection for the Dynamic Chance-Constrained Knapsack Problem [2.5690340428649328]
We propose multi-objective evolutionary approaches for the knapsack problem with profits under static and dynamic weight constraints.
We consider a bi-objective formulation that maximises expected profit and minimises variance.
We derive a three-objective formulation by relaxing the weight constraint into an additional objective.
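
For intuition, a sketch of the kind of (expected profit, profit variance) objective vector and the Pareto dominance comparison an evolutionary multi-objective algorithm would use; names and details are illustrative, not taken from the paper.

```python
import numpy as np

def objective_vector(x, mean_profit, var_profit):
    """(expected profit, profit variance) of a 0/1 solution x,
    assuming independent item profits (illustrative)."""
    return np.dot(x, mean_profit), np.dot(x, var_profit)

def dominates(a, b):
    """a dominates b when its expected profit is >= and its variance is <=, and a != b."""
    return a[0] >= b[0] and a[1] <= b[1] and a != b

a = objective_vector(np.array([1, 1, 0]), np.array([5.0, 3.0, 4.0]), np.array([1.0, 0.5, 2.0]))
b = objective_vector(np.array([1, 0, 1]), np.array([5.0, 3.0, 4.0]), np.array([1.0, 0.5, 2.0]))
print(a, b, dominates(a, b))
```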
arXiv Detail & Related papers (2024-04-12T03:07:15Z) - Using 3-Objective Evolutionary Algorithms for the Dynamic Chance Constrained Knapsack Problem [9.617143859697322]
We explore the use of 3-objective evolutionary algorithms for the chance constrained knapsack problem with dynamic constraints.
We introduce a 3-objective formulation that is able to deal with the dynamic components at the same time and is independent of the confidence level required for the constraint.
Our analysis highlights the advantages of the 3-objective formulation over the 2-objective formulation in addressing the dynamic chance constrained knapsack problem.
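
A rough sketch of why such a 3-objective formulation is independent of the confidence level: the objectives involve only profit, expected weight, and weight variance, and the confidence level is applied afterwards when a solution is extracted from the Pareto front. The helper below uses a Chebyshev-style bound for that extraction; all names are illustrative.

```python
import numpy as np

def three_objectives(x, profit, mean_w, var_w):
    """(total profit, expected weight, weight variance); no confidence level
    appears, so one Pareto front serves every alpha (illustrative)."""
    return np.dot(x, profit), np.dot(x, mean_w), np.dot(x, var_w)

def best_feasible(front, capacity, alpha):
    """From a list of (solution, objective-triple) pairs, return the most
    profitable one meeting the chance constraint for this capacity and alpha."""
    ok = [(p, x) for x, (p, mu, var) in front
          if mu + np.sqrt(var * (1.0 - alpha) / alpha) <= capacity]
    return max(ok, key=lambda t: t[0], default=None)
```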
arXiv Detail & Related papers (2024-04-09T04:47:01Z) - Addressing The Knapsack Challenge Through Cultural Algorithm
Optimization [0.0]
We introduce a novel variant of Cultural Algorithms tailored specifically for solving 0-1 knapsack problems.
Our proposed algorithm incorporates a belief space to refine the population and introduces two vital functions for dynamically adjusting the crossover and mutation rates.
We provide compelling evidence of the algorithm's remarkable efficiency in consistently locating the global optimum.
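
The two ingredients mentioned here (a belief space that refines the population, plus dynamically adjusted crossover and mutation rates) can be pictured with a toy sketch like the following; every detail below is invented for illustration and is not the authors' algorithm.

```python
import random

def cultural_knapsack(values, weights, capacity, pop_size=40, gens=200, seed=1):
    """Toy cultural algorithm for the 0-1 knapsack: a belief space (per-item
    inclusion frequency among elites) biases mutation, and the variation rates
    are adapted when the best fitness stagnates.  Illustrative only."""
    rng = random.Random(seed)
    n = len(values)

    def fitness(x):
        w = sum(wi for wi, xi in zip(weights, x) if xi)
        return sum(vi for vi, xi in zip(values, x) if xi) if w <= capacity else 0

    population = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    best = max(population, key=fitness)
    cx_rate, mut_rate = 0.8, 1.0 / n
    for _ in range(gens):
        elites = sorted(population, key=fitness, reverse=True)[: pop_size // 5]
        belief = [sum(e[i] for e in elites) / len(elites) for i in range(n)]  # belief space
        if fitness(elites[0]) > fitness(best):
            best, cx_rate, mut_rate = elites[0], 0.8, 1.0 / n                 # progress: reset rates
        else:
            cx_rate, mut_rate = max(0.5, cx_rate * 0.95), min(0.2, mut_rate * 1.5)  # stagnation: explore
        offspring = []
        while len(offspring) < pop_size:
            a, b = rng.sample(elites, 2)
            child = [ai if rng.random() < cx_rate else bi for ai, bi in zip(a, b)]
            child = [(1 if rng.random() < belief[i] else 0) if rng.random() < mut_rate else child[i]
                     for i in range(n)]
            offspring.append(child)
        population = offspring
    return best, fitness(best)

print(cultural_knapsack([10, 7, 4, 9, 6], [3, 5, 2, 6, 4], capacity=10))
```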
arXiv Detail & Related papers (2023-10-30T17:05:19Z) - Learning to Optimize with Stochastic Dominance Constraints [103.26714928625582]
In this paper, we develop a simple yet efficient approach for the problem of comparing uncertain quantities.
We recast inner optimization in the Lagrangian as a learning problem for surrogate approximation, which bypasses apparent intractability.
The proposed light-SD demonstrates superior performance on several representative problems ranging from finance to supply chain management.
arXiv Detail & Related papers (2022-11-14T21:54:31Z) - Algorithm for Constrained Markov Decision Process with Linear
Convergence [55.41644538483948]
An agent aims to maximize the expected accumulated discounted reward subject to multiple constraints on its costs.
A new dual approach is proposed with the integration of two ingredients: entropy regularized policy and Vaidya's dual.
The proposed approach is shown to converge (with linear rate) to the global optimum.
arXiv Detail & Related papers (2022-06-03T16:26:38Z) - Runtime Analysis of Single- and Multi-Objective Evolutionary Algorithms for Chance Constrained Optimization Problems with Normally Distributed Random Variables [11.310502327308575]
We study the scenario of components that are independent and normally distributed.
We introduce a multi-objective formulation of the problem which trades off the expected cost and its variance.
We prove that this approach can also be used to compute a set of optimal solutions for the chance constrained minimum spanning tree problem.
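
In the normally distributed setting, the chance constraint can be evaluated exactly through the Gaussian quantile instead of a tail bound; a small sketch under the assumption of independent components (names illustrative):

```python
from math import sqrt
from statistics import NormalDist

def normal_capacity_bound(x, mean_w, var_w, alpha):
    """For independent normally distributed weights, the total weight of x is
    N(sum mu_i x_i, sum var_i x_i), so the smallest capacity C with
    Pr(weight > C) <= alpha is mu + z_{1-alpha} * sigma (illustrative)."""
    mu = sum(m * xi for m, xi in zip(mean_w, x))
    sigma = sqrt(sum(v * xi for v, xi in zip(var_w, x)))
    return mu + NormalDist().inv_cdf(1.0 - alpha) * sigma

print(normal_capacity_bound([1, 0, 1], [3.0, 2.0, 5.0], [0.5, 0.2, 1.0], alpha=0.1))
```

For the same instance, this exact quantile bound is tighter than the Chebyshev-based bound sketched earlier, since it uses the full distributional assumption rather than only means and variances.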
arXiv Detail & Related papers (2021-09-13T09:24:23Z) - High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds with dependence on the confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
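
Gradient clipping, as referenced here, caps the gradient norm before each step; a generic sketch of one clipped step (not the paper's specific stepsize rules):

```python
import numpy as np

def clipped_sgd_step(w, grad, lr, clip_level):
    """One SGD step with gradient clipping: rescale the stochastic gradient so
    its norm never exceeds clip_level, taming heavy-tailed noise (generic sketch)."""
    norm = np.linalg.norm(grad)
    if norm > clip_level:
        grad = grad * (clip_level / norm)
    return w - lr * grad

w = np.zeros(3)
print(clipped_sgd_step(w, np.array([10.0, -4.0, 2.0]), lr=0.1, clip_level=1.0))
```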
arXiv Detail & Related papers (2021-06-10T17:54:21Z) - Modeling the Second Player in Distributionally Robust Optimization [90.25995710696425]
We argue for the use of neural generative models to characterize the worst-case distribution.
This approach poses a number of implementation and optimization challenges.
We find that the proposed approach yields models that are more robust than comparable baselines.
arXiv Detail & Related papers (2021-03-18T14:26:26Z) - Offline Model-Based Optimization via Normalized Maximum Likelihood
Estimation [101.22379613810881]
We consider data-driven optimization problems where one must maximize a function given only queries at a fixed set of points.
This problem setting emerges in many domains where function evaluation is a complex and expensive process.
We propose a tractable approximation that allows us to scale our method to high-capacity neural network models.
arXiv Detail & Related papers (2021-02-16T06:04:27Z) - Specific Single- and Multi-Objective Evolutionary Algorithms for the
Chance-Constrained Knapsack Problem [14.352521012951865]
We introduce a new effective multi-objective model for the chance-constrained knapsack problem.
Our experimental results show that this leads to significant performance improvements when using the approach in evolutionary multi-objective algorithms.
arXiv Detail & Related papers (2020-04-07T08:46:51Z)