Evolutionary Algorithms for Limiting the Effect of Uncertainty for the
Knapsack Problem with Stochastic Profits
- URL: http://arxiv.org/abs/2204.05597v1
- Date: Tue, 12 Apr 2022 07:56:50 GMT
- Title: Evolutionary Algorithms for Limiting the Effect of Uncertainty for the
Knapsack Problem with Stochastic Profits
- Authors: Aneta Neumann, Yue Xie, Frank Neumann
- Abstract summary: We consider the knapsack problem where the profits involve uncertainties.
We introduce different ways of dealing with stochastic profits based on tail inequalities that make it possible to limit the impact of uncertainties.
- Score: 14.352521012951865
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Evolutionary algorithms have been widely used for a range of stochastic
optimization problems in order to address complex real-world optimization
problems. We consider the knapsack problem where the profits involve
uncertainties. Such a stochastic setting reflects important real-world
scenarios where the profit that can be realized is uncertain. We introduce
different ways of dealing with stochastic profits based on tail inequalities
such as Chebyshev's inequality and Hoeffding bounds that make it possible to
limit the impact of uncertainties. We examine simple evolutionary algorithms
and the use of heavy-tail mutation and a problem-specific crossover operator
for optimizing uncertain profits. Our experimental investigations on different
benchmark instances show the results of different approaches based on tail
inequalities as well as the improvements achievable through heavy-tail mutation
and the problem-specific crossover operator.
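The abstract only names the two tail bounds, so the following is a minimal Python sketch (our illustration, not the authors' implementation) of how tail-inequality fitness functions and heavy-tailed mutation can be combined in a simple (1+1) EA. The function names, the rejection-based constraint handling, and the confidence level alpha are all assumptions for the sake of the example; independent item profits with known means (and variances, or a uniform half-width delta) are assumed.

```python
import math
import random

def chebyshev_lower_bound(x, mu, sigma2, alpha):
    """One-sided Chebyshev (Cantelli) bound: the realized profit of packing x
    falls below the returned value with probability at most alpha, assuming
    independent item profits with means mu[i] and variances sigma2[i]."""
    mean = sum(m for m, xi in zip(mu, x) if xi)
    var = sum(v for v, xi in zip(sigma2, x) if xi)
    return mean - math.sqrt((1.0 - alpha) / alpha * var)

def hoeffding_lower_bound(x, mu, delta, alpha):
    """Hoeffding-based bound for profits lying independently in
    [mu[i] - delta, mu[i] + delta]."""
    mean = sum(m for m, xi in zip(mu, x) if xi)
    k = sum(x)  # number of packed items
    return mean - delta * math.sqrt(2.0 * k * math.log(1.0 / alpha))

def heavy_tail_rate(n, beta=1.5):
    """Heavy-tailed (fast-GA style) mutation: draw a strength k from a
    power-law distribution on {1, ..., n//2}, flip each bit with prob k/n."""
    ks = list(range(1, n // 2 + 1))
    k = random.choices(ks, weights=[j ** (-beta) for j in ks])[0]
    return k / n

def mutate(x):
    rate = heavy_tail_rate(len(x))
    return [1 - xi if random.random() < rate else xi for xi in x]

def one_plus_one_ea(mu, sigma2, w, capacity, alpha=0.1, iters=20000):
    """(1+1) EA maximizing the Chebyshev-based profit bound; infeasible
    offspring are simply rejected (one of several possible handlings)."""
    x = [0] * len(mu)  # the empty packing is always feasible
    fx = chebyshev_lower_bound(x, mu, sigma2, alpha)
    for _ in range(iters):
        y = mutate(x)
        if sum(wi for wi, yi in zip(w, y) if yi) > capacity:
            continue  # reject offspring that violate the weight constraint
        fy = chebyshev_lower_bound(y, mu, sigma2, alpha)
        if fy >= fx:
            x, fx = y, fy
    return x, fx
```

Maximizing either bound penalizes uncertainty: two packings with equal expected profit are ranked by their total variance (Chebyshev) or by the number of noisy items they contain (Hoeffding).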
Related papers
- Error Feedback under $(L_0,L_1)$-Smoothness: Normalization and Momentum [56.37522020675243]
We provide the first proof of convergence for normalized error feedback algorithms across a wide range of machine learning problems.
We show that due to their larger allowable stepsizes, our new normalized error feedback algorithms outperform their non-normalized counterparts on various tasks.
arXiv Detail & Related papers (2024-10-22T10:19:27Z)
- A Guide to Stochastic Optimisation for Large-Scale Inverse Problems [4.926711494319977]
Stochastic optimisation algorithms are the de facto standard for machine learning with large amounts of data.
We provide a comprehensive account of the state-of-the-art in optimisation from the viewpoint of inverse problems.
We focus on the challenges for optimisation that are unique and are not commonly encountered in machine learning.
arXiv Detail & Related papers (2024-06-10T15:02:30Z)
- Evolving Reliable Differentiating Constraints for the Chance-constrained Maximum Coverage Problem [8.98161858972433]
We study the classical maximum coverage problem in graphs with chance constraints.
Our goal is to evolve reliable chance constraint settings for a given graph where the performance of algorithms differs significantly.
We develop an evolutionary algorithm that provides sets of chance constraints that differentiate the performance of two search algorithms with high confidence.
arXiv Detail & Related papers (2024-05-29T05:22:31Z)
- Model-Based Epistemic Variance of Values for Risk-Aware Policy Optimization [59.758009422067]
We consider the problem of quantifying uncertainty over expected cumulative rewards in model-based reinforcement learning.
We propose a new uncertainty Bellman equation (UBE) whose solution converges to the true posterior variance over values.
We introduce a general-purpose policy optimization algorithm, Q-Uncertainty Soft Actor-Critic (QU-SAC) that can be applied for either risk-seeking or risk-averse policy optimization.
arXiv Detail & Related papers (2023-12-07T15:55:58Z)
- Evolutionary Multi-Objective Algorithms for the Knapsack Problems with Stochastic Profits [13.026567958569965]
We consider a version of the knapsack problem with stochastic profits where the goal is to guarantee a certain level of confidence in the profit of a solution.
We introduce multi-objective formulations of the profit chance constrained knapsack problem and design three bi-objective fitness evaluation methods.
We show the effectiveness of our approaches on several benchmarks for both settings.
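The summary does not spell out the three bi-objective evaluation methods, but one natural choice (an assumption on our part, sketched below) treats expected profit and profit variance as competing objectives and compares solutions by Pareto dominance:

```python
def objectives(x, mu, sigma2):
    """Hypothetical bi-objective evaluation: maximize expected profit,
    minimize profit variance (assumes independent item profits)."""
    mean = sum(m for m, xi in zip(mu, x) if xi)
    var = sum(v for v, xi in zip(sigma2, x) if xi)
    return mean, var

def dominates(a, b):
    """Pareto dominance for (mean, variance) pairs: no worse in both
    objectives (higher mean, lower variance), strictly better in one."""
    no_worse = a[0] >= b[0] and a[1] <= b[1]
    better = a[0] > b[0] or a[1] < b[1]
    return no_worse and better
```

A final solution could then be picked from the non-dominated front, for example the feasible point with the largest tail-inequality confidence bound.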
arXiv Detail & Related papers (2023-03-03T03:28:51Z)
- Model-Based Uncertainty in Value Functions [89.31922008981735]
We focus on characterizing the variance over values induced by a distribution over MDPs.
Previous work upper bounds the posterior variance over values by solving a so-called uncertainty Bellman equation.
We propose a new uncertainty Bellman equation whose solution converges to the true posterior variance over values.
arXiv Detail & Related papers (2023-02-24T09:18:27Z)
- Distributed Stochastic Optimization under a General Variance Condition [13.911633636387059]
Distributed stochastic optimization has drawn great attention recently due to its effectiveness in solving large-scale machine learning problems.
We revisit the classical Federated Averaging (FedAvg) algorithm and establish convergence results under only a mild variance condition for smooth non-convex objective functions.
Almost sure convergence to a stationary point is also established under a condition on the gradients.
arXiv Detail & Related papers (2023-01-30T05:48:09Z)
- Learning to Optimize with Stochastic Dominance Constraints [103.26714928625582]
In this paper, we develop a simple yet efficient approach for the problem of comparing uncertain quantities.
We recast inner optimization in the Lagrangian as a learning problem for surrogate approximation, which bypasses apparent intractability.
The proposed light-SD demonstrates superior performance on several representative problems ranging from finance to supply chain management.
arXiv Detail & Related papers (2022-11-14T21:54:31Z)
- Optimal variance-reduced stochastic approximation in Banach spaces [114.8734960258221]
We study the problem of estimating the fixed point of a contractive operator defined on a separable Banach space.
We establish non-asymptotic bounds for both the operator defect and the estimation error.
arXiv Detail & Related papers (2022-01-21T02:46:57Z)
- High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide a small objective residual with high probability.
Existing methods for non-smooth stochastic convex optimization have complexity bounds with dependence on the confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
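For readers unfamiliar with the clipping technique this summary refers to, here is a generic sketch of a single clipped-SGD step; the function name and parameters are our own, and the paper's novel stepsize rules are not reproduced here.

```python
import math

def clipped_sgd_step(x, grad, stepsize, clip_level):
    """Generic clipped-SGD update (illustrative only): rescale the
    stochastic gradient to norm at most clip_level before stepping,
    which tames heavy-tailed noise in high-probability analyses."""
    gnorm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_level / gnorm) if gnorm > 0 else 1.0
    return [xi - stepsize * scale * gi for xi, gi in zip(x, grad)]
```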
arXiv Detail & Related papers (2021-06-10T17:54:21Z)
- Breeding Diverse Packings for the Knapsack Problem by Means of Diversity-Tailored Evolutionary Algorithms [13.026567958569965]
We study evolutionary diversity optimization for the knapsack problem (KP).
Our goal is to evolve a population of solutions that all have a profit of at least $(1-\varepsilon) \cdot OPT$; we study a $(\mu+1)$-EA with initial approximate solutions calculated by a well-known FPTAS for the KP.
arXiv Detail & Related papers (2021-04-27T12:26:18Z)