Features for the 0-1 knapsack problem based on inclusionwise maximal solutions
- URL: http://arxiv.org/abs/2211.09665v1
- Date: Wed, 16 Nov 2022 12:48:35 GMT
- Title: Features for the 0-1 knapsack problem based on inclusionwise maximal solutions
- Authors: Jorik Jooken, Pieter Leyman and Patrick De Causmaecker
- Abstract summary: We formulate several new computationally challenging problems related to the inclusionwise maximal solutions (IMSs) of arbitrary 0-1 knapsack problem instances.
We derive a set of 14 computationally expensive features, which we calculate for two large datasets on a supercomputer in approximately 540 CPU-hours.
We show that the proposed features contain important information related to the empirical hardness of a problem instance.
- Score: 0.7734726150561086
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Decades of research on the 0-1 knapsack problem led to very efficient
algorithms that are able to quickly solve large problem instances to
optimality. This prompted researchers to also investigate whether relatively
small problem instances exist that are hard for existing solvers and
investigate which features characterize their hardness. Previously, the authors
proposed a new class of hard 0-1 knapsack problem instances and demonstrated
that the properties of so-called inclusionwise maximal solutions (IMSs) can be
important hardness indicators for this class. In the current paper, we
formulate several new computationally challenging problems related to the IMSs
of arbitrary 0-1 knapsack problem instances. Based on generalizations of
previous work and new structural results about IMSs, we formulate polynomial
and pseudopolynomial time algorithms for solving these problems. From this we
derive a set of 14 computationally expensive features, which we calculate for
two large datasets on a supercomputer in approximately 540 CPU-hours. By
training machine learning models that can accurately predict the empirical
hardness of a wide variety of 0-1 knapsack problem instances, we show that the
proposed features capture important information about empirical hardness that
was missing from earlier features in the literature. Using the instance space
analysis methodology, we also show that
hard 0-1 knapsack problem instances are clustered together around a relatively
dense region of the instance space and several features behave differently in
the easy and hard parts of the instance space.
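For intuition, an inclusionwise maximal solution (IMS) is a feasible subset of items to which no remaining item can be added without exceeding the knapsack capacity. The sketch below checks the IMS property and greedily completes a partial solution to an IMS on a toy instance; it is a minimal illustration with invented names, not one of the paper's polynomial or pseudopolynomial algorithms.

```python
# Minimal IMS sketch for the 0-1 knapsack problem (illustrative only).

def is_ims(weights, capacity, chosen):
    """A feasible solution is an IMS if no unchosen item still fits."""
    used = sum(weights[i] for i in chosen)
    if used > capacity:
        return False  # infeasible, hence not an IMS
    slack = capacity - used
    return all(weights[i] > slack for i in range(len(weights)) if i not in chosen)

def extend_to_ims(weights, capacity, chosen):
    """Greedily add items that still fit until no further item does."""
    chosen, used = set(chosen), sum(weights[i] for i in chosen)
    for i in range(len(weights)):
        if i not in chosen and used + weights[i] <= capacity:
            chosen.add(i)
            used += weights[i]
    return chosen

weights, capacity = [4, 3, 2, 5], 7
ims = extend_to_ims(weights, capacity, {0})  # -> {0, 1} with this scan order
assert is_ims(weights, capacity, ims)
```

The greedy completion yields just one IMS per scan order; the paper's features involve harder questions over the set of all IMSs, which is why they are computationally expensive.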
Related papers
- Easy2Hard-Bench: Standardized Difficulty Labels for Profiling LLM Performance and Generalization [126.27645170941268]
We present Easy2Hard-Bench, a collection of 6 benchmark datasets spanning various domains.
Each problem within these datasets is annotated with numerical difficulty scores.
We provide a comprehensive analysis of LLM performance and generalization capabilities across varying levels of difficulty.
arXiv Detail & Related papers (2024-09-27T03:49:56Z) - Sparse Representations, Inference and Learning [0.0]
We will present a general framework that can be used in a large variety of problems with weak long-range interactions.
We shall see how these problems can be studied at the replica symmetric level, using developments of the cavity method.
arXiv Detail & Related papers (2023-06-28T10:58:27Z) - Preliminary Results on Using Abstract AND-OR Graphs for Generalized
Solving of Stochastic Shortest Path Problems [25.152899734616298]
Stochastic Shortest Path Problems (SSPs) are goal-oriented problems that arise in the real world.
A key difficulty in computing solutions for SSPs is that even moderately sized problems are intractable.
We show that our approach can be embedded in any SSP solver to compute hierarchically optimal policies.
arXiv Detail & Related papers (2022-04-08T21:30:47Z) - Provable Reinforcement Learning with a Short-Term Memory [68.00677878812908]
We study a new subclass of POMDPs, whose latent states can be decoded from the most recent history of a short length $m$.
In particular, in the rich-observation setting, we develop new algorithms using a novel "moment matching" approach with a sample complexity that scales exponentially in the short length $m$ rather than in the full horizon.
Our results show that a short-term memory suffices for reinforcement learning in these environments.
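The decodability assumption can be made concrete with a generic wrapper that folds the last $m$ observations into the agent's state, turning the POMDP into an MDP over short histories; this is a standard construction for illustration, not the paper's "moment matching" algorithm, and the Gym-like env API is assumed.

```python
from collections import deque

class ShortMemoryWrapper:
    """Expose the last m observations as the state (assumed Gym-like env)."""

    def __init__(self, env, m):
        self.env = env
        self.history = deque(maxlen=m)  # keeps only the m most recent obs

    def reset(self):
        self.history.clear()
        self.history.append(self.env.reset())
        return tuple(self.history)

    def step(self, action):
        obs, reward, done = self.env.step(action)
        self.history.append(obs)  # the m-tuple now decodes the latent state
        return tuple(self.history), reward, done
```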
arXiv Detail & Related papers (2022-02-08T16:39:57Z) - Generalization of Neural Combinatorial Solvers Through the Lens of
Adversarial Robustness [68.97830259849086]
Most datasets only capture a simpler subproblem and likely suffer from spurious features.
We study adversarial robustness - a local generalization property - to reveal hard, model-specific instances and spurious features.
Unlike in other applications, where perturbation models are designed around subjective notions of imperceptibility, our perturbation models are efficient and sound.
Surprisingly, with such perturbations, a sufficiently expressive neural solver does not suffer from the limitations of the accuracy-robustness trade-off common in supervised learning.
arXiv Detail & Related papers (2021-10-21T07:28:11Z) - Runtime Analysis of RLS and the (1+1) EA for the Chance-constrained
Knapsack Problem with Correlated Uniform Weights [15.402666674186936]
We perform runtime analysis of randomized local search (RLS) and a basic evolutionary algorithm, the (1+1) EA, for the chance-constrained knapsack problem with correlated uniform weights.
We show how the weight correlations and the different types of profit profiles influence the runtime behavior of both algorithms in the chance-constrained setting.
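As a reminder of the algorithms under analysis, a bare-bones (1+1) EA on knapsack bitstrings is sketched below; the chance-constrained objective with correlated weights is abstracted into a user-supplied fitness function, so this is a generic sketch rather than the paper's exact setup.

```python
import random

def one_plus_one_ea(n, fitness, steps=10_000):
    """(1+1) EA: flip each bit independently with prob. 1/n, keep if not worse.

    `fitness` maps a 0/1 list to a comparable value; for the chance-constrained
    knapsack it would combine profit with a penalty for violating the
    probabilistic weight constraint (details abstracted away here).
    """
    x = [random.randint(0, 1) for _ in range(n)]
    for _ in range(steps):
        y = [bit ^ (random.random() < 1 / n) for bit in x]  # standard mutation
        if fitness(y) >= fitness(x):
            x = y  # elitist acceptance: keep offspring if not worse
    return x
```

RLS typically differs only in the mutation step: it flips exactly one uniformly chosen bit instead of each bit with probability 1/n.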
arXiv Detail & Related papers (2021-02-10T23:40:01Z) - Unravelling Small Sample Size Problems in the Deep Learning World [69.82853912238173]
We first present a review of deep learning algorithms for small sample size problems in which the algorithms are segregated according to the space in which they operate.
Secondly, we present the Dynamic Attention Pooling approach, which focuses on extracting global information from the most discriminative sub-part of the feature map.
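A generic attention-pooling step conveys the idea: spatial positions are weighted by learned scores so that the most discriminative sub-parts dominate the pooled global descriptor. This is our reading of the concept, not the paper's exact layer; shapes and names are invented.

```python
import numpy as np

def attention_pool(feature_map, scores):
    """Pool a (positions, channels) feature map with per-position scores."""
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()        # softmax over spatial positions
    return weights @ feature_map    # (channels,) weighted global descriptor
```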
arXiv Detail & Related papers (2020-08-08T13:35:49Z) - Computational Barriers to Estimation from Low-Degree Polynomials [81.67886161671379]
We study the power of low-degree polynomials for the task of detecting the presence of hidden structures.
For a large class of "signal plus noise" problems, we give a user-friendly lower bound for the best possible mean squared error achievable by any low-degree polynomial.
As applications, we give a tight characterization of the low-degree minimum mean squared error for the planted submatrix and planted dense subgraph problems.
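Concretely, the central quantity is the low-degree minimum mean squared error; in notation that may differ slightly from the paper's,

```latex
\mathrm{MMSE}_{\le D}
  \;=\;
  \inf_{\deg f \le D} \mathbb{E}\!\left[ \bigl( x - f(Y) \bigr)^{2} \right],
```

where $Y$ is the observed signal-plus-noise data, $x$ is the hidden quantity to estimate, and the infimum runs over polynomials $f$ of degree at most $D$.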
arXiv Detail & Related papers (2020-08-05T17:52:10Z) - Sample-Efficient Reinforcement Learning of Undercomplete POMDPs [91.40308354344505]
This work shows that these hardness barriers do not preclude efficient reinforcement learning for rich and interesting subclasses of Partially Observable Markov Decision Processes (POMDPs).
We present a sample-efficient algorithm, OOM-UCB, for episodic finite undercomplete POMDPs, where the number of observations is larger than the number of latent states and where exploration is essential for learning, thus distinguishing our results from prior works.
arXiv Detail & Related papers (2020-06-22T17:58:54Z) - The Backbone Method for Ultra-High Dimensional Sparse Machine Learning [3.7565501074323224]
We present the backbone method, a generic framework that enables sparse and interpretable supervised machine learning methods to scale to ultra-high dimensional problems.
We solve sparse regression problems with $10^7$ features in minutes and $10^8$ features in hours, as well as decision tree problems with $10^5$ features in minutes.
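Schematically, the backbone idea is: fit tractable sparse models on many random feature subsets, take the union of features ever selected (the backbone), and fit the final model on that much smaller set. The sketch below is our schematic reading, with `fit_sparse` an assumed black box, not the paper's implementation.

```python
import numpy as np

def backbone_select(X, y, fit_sparse, n_rounds=50, subset_size=1000, seed=0):
    """Collect the union of features selected across subsampled sparse fits.

    `fit_sparse(X_sub, y)` is an assumed black box (e.g., a lasso) returning
    the indices of the columns of X_sub it selects.
    """
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    backbone = set()
    for _ in range(n_rounds):
        cols = rng.choice(n_features, size=min(subset_size, n_features),
                          replace=False)
        selected = fit_sparse(X[:, cols], y)
        backbone.update(cols[selected].tolist())  # map back to original indices
    return sorted(backbone)  # fit the final, interpretable model on these
```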
arXiv Detail & Related papers (2020-06-11T16:43:02Z) - Generalization of Machine Learning for Problem Reduction: A Case Study
on Travelling Salesman Problems [5.370742369840518]
We show that a machine learning model can greedily remove decision variables from an optimization problem that are predicted not to be part of an optimal solution.
We consider three scenarios where training and test instances are different in terms of: 1) problem characteristics; 2) problem sizes; and 3) problem types.
While the accuracy of predicting unused variables naturally deteriorates as a test instance is further away from the training set, we observe that even when tested on a different TSP problem, the machine learning model still makes useful predictions.
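Schematically, this kind of problem reduction scores each decision variable with a trained classifier and greedily drops those confidently predicted to be unused; the sketch below is generic, with `predict_prob_used` an assumed trained model, not the paper's procedure.

```python
def reduce_problem(variables, predict_prob_used, threshold=0.05):
    """Drop variables (e.g., TSP edges) predicted not to appear in an
    optimal solution; the reduced instance goes to an exact solver.

    `predict_prob_used(v)` is an assumed classifier giving the probability
    that variable v is used in an optimal solution.
    """
    return [v for v in variables if predict_prob_used(v) >= threshold]
```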
arXiv Detail & Related papers (2020-05-12T15:09:20Z)