Self-adjusting optimization algorithm for solving the set-union knapsack problem
- URL: http://arxiv.org/abs/2202.05698v1
- Date: Sun, 23 Jan 2022 14:15:49 GMT
- Title: Self-adjusting optimization algorithm for solving the set-union knapsack problem
- Authors: Congcong Wu, Xiangyun Gao, Xueyong Liu, Bowen Sun
- Abstract summary: The set-union knapsack problem (SUKP) is a constrained composite optimization problem.
We present two self-adjusting optimization algorithms for approximating SUKP from the item and element perspectives, respectively.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The set-union knapsack problem (SUKP) is a constrained composite
optimization problem. It is more difficult to solve because values depend on
items while weights depend on elements. In this paper, we present two
self-adjusting optimization algorithms that approximate SUKP from the item and
element perspectives, respectively. By analyzing the dynamic characteristics of
the SUKP, we design two types of self-adjusting repair and optimization
operators based on different loading processes. We use the novel
teaching-learning-based optimization algorithm (TLBO) to design a general
discrete framework (DTLBO) suitable for these two types of operators. In
addition, we introduce an elite opposite search and a natural selection
mechanism into DTLBO to further improve the performance of the algorithm at the
population level. Finally, we performed experimental comparisons on benchmark
sets to verify the effectiveness of the proposed algorithms. The experimental
results show that the item-based self-adjusting optimization algorithm I-DTLBO
is outstanding and superior to other swarm intelligence methods for solving
SUKP: I-DTLBO matches the best results of current swarm intelligence algorithms
for solving SUKP in 10 instances and obtains new best results in 15 instances.
The element-based algorithm E-DTLBO performs slightly better only on small and
medium-sized data sets, and worse on large-scale instances. This indicates that
the element-based design is not suitable for solving SUKP.
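Since the abstract assumes familiarity with SUKP, a minimal sketch may help: each item carries a profit and covers a set of elements, each element carries a weight that is counted once across the union of the selected items' elements, and an infeasible selection must be repaired. The function names and the greedy profit-density repair rule below are illustrative assumptions for this listing, not the paper's self-adjusting operators or the I-DTLBO framework.

```python
def sukp_value(selected, profits, covers, weights, capacity):
    """Return the total profit of the selected items, or None if the
    union of their covered elements exceeds the knapsack capacity."""
    used = set()
    for i in selected:
        used |= covers[i]  # shared elements are counted only once
    if sum(weights[e] for e in used) > capacity:
        return None  # infeasible selection
    return sum(profits[i] for i in selected)

def greedy_repair(selected, profits, covers, weights, capacity):
    """Drop items with the lowest profit per unit of *marginal* element
    weight until the union weight fits. A simple item-based repair in
    the spirit of, but not identical to, the paper's operators."""
    selected = list(selected)
    while selected:
        used = set().union(*(covers[i] for i in selected))
        if sum(weights[e] for e in used) <= capacity:
            break  # selection is now feasible

        def density(i):
            # elements only item i contributes, relative to the rest
            others = (set().union(*(covers[j] for j in selected if j != i))
                      if len(selected) > 1 else set())
            marginal = sum(weights[e] for e in covers[i] - others)
            return profits[i] / marginal if marginal else float("inf")

        selected.remove(min(selected, key=density))
    return selected
```

For example, with three items whose covered element sets overlap, repairing an overloaded selection removes the item whose profit is lowest relative to the element weight it alone brings in.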
Related papers
- A Pure Quantum Approximate Optimization Algorithm Based on CNR Operation [0.0]
We propose a general-purpose pure quantum approximate optimization algorithm.
The algorithm is constructed with a $p$-level divide-and-conquer structure.
We show the algorithm's performance in detail when the number of qubits required by the two optimization problems is 10.
arXiv Detail & Related papers (2023-10-27T06:54:39Z) - Accelerating Cutting-Plane Algorithms via Reinforcement Learning
Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z) - Decentralized Stochastic Bilevel Optimization with Improved
per-Iteration Complexity [17.870370505179014]
We propose a novel decentralized bilevel optimization (DSBO) algorithm that only requires first order oracle, Hessian-vector product and Jacobian-vector product.
The advantage of our algorithm is that it does not require estimating the full Hessian and Jacobian matrices, thereby having improved per-iteration complexity.
arXiv Detail & Related papers (2022-10-23T20:06:05Z) - Finding and Exploring Promising Search Space for the 0-1 Multidimensional Knapsack Problem [18.19836641663039]
We propose a novel algorithm combining evolutionary computation with the exact algorithm to solve the 0-1 MKP.
It maintains a set of solutions and utilizes the information from the population to extract good partial assignments.
It finds better solutions than the existing algorithms and provides new lower bounds for 10 large and hard instances.
arXiv Detail & Related papers (2022-10-08T05:11:47Z) - A Fully Single Loop Algorithm for Bilevel Optimization without Hessian
Inverse [121.54116938140754]
We propose a new Hessian inverse free Fully Single Loop Algorithm for bilevel optimization problems.
We show that our algorithm converges at a rate of $O(\epsilon^{-2})$.
arXiv Detail & Related papers (2021-12-09T02:27:52Z) - Bayesian Optimization over Permutation Spaces [30.650753803587794]
We propose and evaluate two algorithms for BO over Permutation Spaces (BOPS).
We theoretically analyze the performance of BOPS-T to show that its regret grows sub-linearly.
Our experiments on multiple synthetic and real-world benchmarks show that both BOPS-T and BOPS-H perform better than the state-of-the-art BO algorithm for permutation spaces.
arXiv Detail & Related papers (2021-12-02T08:20:50Z) - ES-Based Jacobian Enables Faster Bilevel Optimization [53.675623215542515]
Bilevel optimization (BO) has arisen as a powerful tool for solving many modern machine learning problems.
Existing gradient-based methods require second-order derivative approximations via Jacobian- or/and Hessian-vector computations.
We propose a novel BO algorithm, which adopts Evolution Strategies (ES) based method to approximate the response Jacobian matrix in the hypergradient of BO.
arXiv Detail & Related papers (2021-10-13T19:36:50Z) - Learning to Hash Robustly, with Guarantees [79.68057056103014]
In this paper, we design an NNS algorithm for the Hamming space that has worst-case guarantees essentially matching that of theoretical algorithms.
We evaluate the algorithm's ability to optimize for a given dataset both theoretically and practically.
Our algorithm achieves 1.8x and 2.1x better recall on the worst-performing queries on the MNIST and ImageNet datasets, respectively.
arXiv Detail & Related papers (2021-08-11T20:21:30Z) - Provably Faster Algorithms for Bilevel Optimization [54.83583213812667]
Bilevel optimization has been widely applied in many important machine learning applications.
We propose two new algorithms for bilevel optimization.
We show that both algorithms achieve a complexity of $\mathcal{O}(\epsilon^{-1.5})$, which outperforms all existing algorithms by an order of magnitude.
arXiv Detail & Related papers (2021-06-08T21:05:30Z) - Bilevel Optimization: Convergence Analysis and Enhanced Design [63.64636047748605]
Bilevel optimization is a tool for many machine learning problems.
We propose a novel computation-efficient stochastic gradient estimator named stocBiO.
arXiv Detail & Related papers (2020-10-15T18:09:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.