Breeding Diverse Packings for the Knapsack Problem by Means of
Diversity-Tailored Evolutionary Algorithms
- URL: http://arxiv.org/abs/2104.13133v1
- Date: Tue, 27 Apr 2021 12:26:18 GMT
- Title: Breeding Diverse Packings for the Knapsack Problem by Means of
Diversity-Tailored Evolutionary Algorithms
- Authors: Jakob Bossek, Aneta Neumann, Frank Neumann
- Abstract summary: We study evolutionary diversity optimization for the knapsack problem (KP).
Our goal is to evolve a population of solutions that all have a profit of at least $(1-\varepsilon)\cdot OPT$. To this end we propose a simple $(\mu+1)$-EA with initial approximate solutions calculated by a well-known FPTAS for the KP.
- Score: 13.026567958569965
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In practice, it is often desirable to provide the decision-maker with a rich
set of diverse solutions of decent quality instead of just a single solution.
In this paper we study evolutionary diversity optimization for the knapsack
problem (KP). Our goal is to evolve a population of solutions that all have a
profit of at least $(1-\varepsilon)\cdot OPT$, where OPT is the value of an
optimal solution. Furthermore, they should differ in structure with respect to
an entropy-based diversity measure. To this end we propose a simple
$(\mu+1)$-EA with initial approximate solutions calculated by a well-known
FPTAS for the KP. We investigate the effect of different standard mutation
operators and introduce biased mutation and crossover which put strong
probability on flipping bits of low and/or high frequency within the
population. An experimental study on different instances and settings shows
that the proposed mutation operators are in most cases slightly inferior in
the long term, but show strong benefits if the number of function evaluations
is severely limited.
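As a rough illustration of the pipeline described in the abstract, the sketch below seeds a population with a solution from the classic profit-scaling FPTAS, then runs a $(\mu+1)$-EA that only accepts offspring meeting the quality threshold and keeps the most diverse population of size $\mu$. The concrete entropy measure, the biased flip rates, and the use of the seed solution's profit as a stand-in for $(1-\varepsilon)\cdot OPT$ are assumptions made for illustration (all function names here are hypothetical); the paper's actual operators may differ in detail.

```python
# Illustrative sketch only; not the authors' exact algorithm or operators.
import math
import random


def knapsack_fptas(profits, weights, capacity, eps):
    """Textbook profit-scaling FPTAS for the 0-1 knapsack problem.

    Returns a 0/1 vector whose profit is at least (1 - eps) * OPT.
    """
    n = len(profits)
    pmax = max(p for p, w in zip(profits, weights) if w <= capacity)
    scale = eps * pmax / n
    scaled = [int(p / scale) for p in profits]
    # dp: achievable scaled profit -> (minimum weight, item vector)
    dp = {0: (0, [0] * n)}
    for i in range(n):
        new_dp = dict(dp)
        for v, (w, items) in dp.items():
            nv, nw = v + scaled[i], w + weights[i]
            if nw <= capacity and (nv not in new_dp or nw < new_dp[nv][0]):
                chosen = items[:]
                chosen[i] = 1
                new_dp[nv] = (nw, chosen)
        dp = new_dp
    return dp[max(dp)][1]


def entropy_diversity(population, n_items):
    """Sum over items of the binary entropy of the item's inclusion
    frequency in the population (one plausible entropy-based measure)."""
    mu = len(population)
    h = 0.0
    for i in range(n_items):
        p = sum(sol[i] for sol in population) / mu
        for q in (p, 1.0 - p):
            if q > 0.0:
                h -= q * math.log(q)
    return h


def biased_mutation(parent, population, n_items, strength=3.0):
    """Bit-flip mutation with boosted flip probability for items whose
    population frequency is very low or very high."""
    mu = len(population)
    child = parent[:]
    for i in range(n_items):
        f = sum(sol[i] for sol in population) / mu
        rate = (1.0 / n_items) * (1.0 + 2.0 * strength * abs(f - 0.5))
        if random.random() < min(rate, 1.0):
            child[i] = 1 - child[i]
    return child


def mu_plus_one_ea(profits, weights, capacity, seed_solution,
                   mu=20, evaluations=10_000):
    """(mu+1)-EA evolving a diverse population of good packings.

    seed_solution: a (1 - eps)-approximate packing (e.g. from the FPTAS
    above); its profit is used as a stand-in for (1 - eps) * OPT.
    """
    n = len(profits)
    threshold = sum(p for p, x in zip(profits, seed_solution) if x)
    population = [seed_solution[:] for _ in range(mu)]

    for _ in range(evaluations):
        parent = random.choice(population)
        child = biased_mutation(parent, population, n)
        profit = sum(p for p, x in zip(profits, child) if x)
        weight = sum(w for w, x in zip(weights, child) if x)
        if profit < threshold or weight > capacity:
            continue  # reject offspring violating quality or capacity
        # Add the child, then drop whichever individual's removal
        # leaves the most diverse population of size mu.
        extended = population + [child]
        drop = max(range(len(extended)),
                   key=lambda j: entropy_diversity(
                       extended[:j] + extended[j + 1:], n))
        del extended[drop]
        population = extended
    return population


if __name__ == "__main__":
    random.seed(1)
    profits = [random.randint(1, 100) for _ in range(30)]
    weights = [random.randint(1, 100) for _ in range(30)]
    capacity = sum(weights) // 3
    seed = knapsack_fptas(profits, weights, capacity, eps=0.1)
    pop = mu_plus_one_ea(profits, weights, capacity, seed, mu=10,
                         evaluations=2_000)
    print("population diversity:", entropy_diversity(pop, len(profits)))
```

Because the survivor-selection step may also discard the offspring itself, the diversity score of the population never decreases once all members satisfy the profit and capacity constraints.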
Related papers
- Model-Based Epistemic Variance of Values for Risk-Aware Policy Optimization [59.758009422067]
We consider the problem of quantifying uncertainty over expected cumulative rewards in model-based reinforcement learning.
We propose a new uncertainty Bellman equation (UBE) whose solution converges to the true posterior variance over values.
We introduce a general-purpose policy optimization algorithm, Q-Uncertainty Soft Actor-Critic (QU-SAC) that can be applied for either risk-seeking or risk-averse policy optimization.
arXiv Detail & Related papers (2023-12-07T15:55:58Z) - Evolutionary Multi-Objective Algorithms for the Knapsack Problems with
Stochastic Profits [13.026567958569965]
We consider a version of the knapsack problem with stochastic profits to guarantee a certain level of confidence in the profit of an item.
We introduce multi-objective formulations of the profit chance constrained knapsack problem and design three bi-objective fitness evaluation methods.
We show the effectiveness of our approaches on several benchmarks for both settings.
arXiv Detail & Related papers (2023-03-03T03:28:51Z) - Generalizing Bayesian Optimization with Decision-theoretic Entropies [102.82152945324381]
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions.
arXiv Detail & Related papers (2022-10-04T04:43:58Z) - Evolutionary Algorithms for Limiting the Effect of Uncertainty for the
Knapsack Problem with Stochastic Profits [14.352521012951865]
We consider the knapsack problem where the profits involve uncertainties.
We introduce different ways of dealing with profits based on tail inequalities that allow us to limit the impact of uncertainties.
arXiv Detail & Related papers (2022-04-12T07:56:50Z) - Instance-optimality in optimal value estimation: Adaptivity via
variance-reduced Q-learning [99.34907092347733]
We analyze the problem of estimating optimal $Q$-value functions for a discounted Markov decision process with discrete states and actions.
Using a local minimax framework, we show that this functional arises in lower bounds on the accuracy of any estimation procedure.
In the other direction, we establish the sharpness of our lower bounds, up to factors logarithmic in the state and action spaces, by analyzing a variance-reduced version of $Q$-learning.
arXiv Detail & Related papers (2021-06-28T00:38:54Z) - Analysis of Evolutionary Diversity Optimisation for Permutation Problems [17.268191781480745]
We investigate evolutionary diversity optimization for three of the most well-studied permutation problems.
Results show that many mutation operators for these problems guarantee convergence to maximally diverse populations.
Experiments are carried out on QAPLIB and synthetic instances in unconstrained and constrained settings.
arXiv Detail & Related papers (2021-02-23T03:13:26Z) - Optimal Off-Policy Evaluation from Multiple Logging Policies [77.62012545592233]
We study off-policy evaluation from multiple logging policies, each generating a dataset of fixed size, i.e., stratified sampling.
We find the OPE estimator for multiple loggers with minimum variance for any instance, i.e., the efficient one.
arXiv Detail & Related papers (2020-10-21T13:43:48Z) - Evolutionary Diversity Optimization and the Minimum Spanning Tree
Problem [13.264683014487376]
We study the well-known Minimum Spanning Tree problem (MST) in the context of diversity optimization.
We show that a simple $(\mu+1)$-EA can effectively compute a diversified population of spanning trees of high quality.
arXiv Detail & Related papers (2020-10-21T11:50:49Z) - Isotonic regression with unknown permutations: Statistics, computation,
and adaptation [13.96377843988598]
We study the minimax risk of estimation (in empirical $L_2$ loss) and the fundamental limits of adaptation (quantified by the adaptivity index).
We provide a Mirsky partition estimator that is minimax optimal while also achieving the smallest adaptivity index possible for vanilla polynomial-time procedures.
In a complementary direction, we show that natural modifications of existing estimators fail to satisfy at least one of the desiderata: optimal worst-case statistical performance, computational efficiency, and fast adaptation.
arXiv Detail & Related papers (2020-09-05T22:17:51Z) - SGD with shuffling: optimal rates without component convexity and large
epoch requirements [60.65928290219793]
We consider the RandomShuffle (shuffle at the beginning of each epoch) and SingleShuffle (shuffle only once) schemes.
We establish minimax optimal convergence rates of these algorithms up to poly-log factor gaps.
We further sharpen the tight convergence results for RandomShuffle by removing the drawbacks common to all prior work.
arXiv Detail & Related papers (2020-06-12T05:00:44Z) - Statistical Efficiency of Thompson Sampling for Combinatorial
Semi-Bandits [56.31950477139053]
We investigate the combinatorial multi-armed bandit problem with semi-bandit feedback (CMAB).
We analyze variants of the Combinatorial Thompson Sampling policy (CTS).
This last result gives us an alternative to the Efficient Sampling for Combinatorial Bandit policy (ESCB).
arXiv Detail & Related papers (2020-06-11T17:12:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.