Unveiling the Limits of Learned Local Search Heuristics: Are You the
Mightiest of the Meek?
- URL: http://arxiv.org/abs/2310.19990v1
- Date: Mon, 30 Oct 2023 20:16:42 GMT
- Title: Unveiling the Limits of Learned Local Search Heuristics: Are You the
Mightiest of the Meek?
- Authors: Ankur Nath, Alan Kuhnle
- Abstract summary: We show that a simple learned heuristic based on Tabu Search surpasses state-of-the-art learned heuristics in terms of performance and generalizability.
Our findings challenge prevailing assumptions and open up exciting avenues for future research.
- Score: 14.195843311387591
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, combining neural networks with local search heuristics has
become popular in the field of combinatorial optimization. Despite its
considerable computational demands, this approach has exhibited promising
outcomes with minimal manual engineering. However, we have identified three
critical limitations in the empirical evaluation of these integration attempts.
Firstly, instances with moderate complexity and weak baselines pose a challenge
in accurately evaluating the effectiveness of learning-based approaches.
Secondly, the absence of an ablation study makes it difficult to quantify and
attribute improvements accurately to the deep learning architecture. Lastly,
the generalization of learned heuristics across diverse distributions remains
underexplored. In this study, we conduct a comprehensive investigation into
these identified limitations. Surprisingly, we demonstrate that a simple
learned heuristic based on Tabu Search surpasses state-of-the-art (SOTA)
learned heuristics in terms of performance and generalizability. Our findings
challenge prevailing assumptions and open up exciting avenues for future
research and innovation in combinatorial optimization.
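For context, below is a minimal sketch of a classical Tabu Search move loop for Max-Cut, a common benchmark in this line of work. The data layout (vertices 0..n-1 with a symmetric weighted adjacency dict), the tenure, and the stopping rule are illustrative assumptions, not the authors' exact configuration.

```python
import random

def tabu_search_maxcut(adj, n_iters=1000, tenure=20, seed=0):
    """Minimal Tabu Search for Max-Cut. `adj` maps each vertex in 0..n-1 to a
    dict {neighbor: weight} (symmetric); returns (best_cut, best_partition)."""
    rng = random.Random(seed)
    n = len(adj)
    side = [rng.randint(0, 1) for _ in range(n)]  # random initial partition

    def flip_gain(u):
        # Change in cut value if vertex u switches sides: uncut edges at u
        # become cut (+w), cut edges become uncut (-w).
        return sum(w if side[u] == side[v] else -w for v, w in adj[u].items())

    cut = sum(w for u in adj for v, w in adj[u].items() if side[u] != side[v]) / 2
    best_cut, best_side = cut, side[:]
    tabu_until = [0] * n  # iteration until which each vertex's flip is tabu

    for it in range(n_iters):
        # Best non-tabu flip; tabu flips are allowed only if they would beat
        # the best solution found so far (aspiration criterion).
        candidates = [
            (flip_gain(u), u) for u in range(n)
            if tabu_until[u] <= it or cut + flip_gain(u) > best_cut
        ]
        if not candidates:
            continue
        gain, u = max(candidates)
        side[u] ^= 1                  # apply the (possibly worsening) move
        cut += gain
        tabu_until[u] = it + tenure   # forbid reversing this move for a while
        if cut > best_cut:
            best_cut, best_side = cut, side[:]
    return best_cut, best_side
```

Learned variants in this literature typically replace the hand-crafted move selection with a policy network; the paper's finding is that a simple learned heuristic built on this kind of loop already outperforms more elaborate SOTA architectures.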
Related papers
- Interactive Ontology Matching with Cost-Efficient Learning [2.006461411375746]
This work introduces DualLoop, an active learning method tailored to ontology matching.
Compared to existing active learning methods, we consistently achieved better F1 scores and recall.
We report our operational performance results within the Architecture, Engineering, Construction (AEC) industry sector.
arXiv Detail & Related papers (2024-04-11T11:53:14Z)
- Rethinking Complex Queries on Knowledge Graphs with Neural Link Predictors [58.340159346749964]
We propose a new neural-symbolic method to support end-to-end learning using complex queries with provable reasoning capability.
We develop a new dataset containing ten new types of queries with features that have never been considered.
Our method outperforms previous methods significantly in the new dataset and also surpasses previous methods in the existing dataset at the same time.
arXiv Detail & Related papers (2023-04-14T11:35:35Z)
- Representation Learning with Multi-Step Inverse Kinematics: An Efficient and Optimal Approach to Rich-Observation RL [106.82295532402335]
Existing reinforcement learning algorithms suffer from computational intractability, strong statistical assumptions, and suboptimal sample complexity.
We provide the first computationally efficient algorithm that attains rate-optimal sample complexity with respect to the desired accuracy level.
Our algorithm, MusIK, combines systematic exploration with representation learning based on multi-step inverse kinematics.
arXiv Detail & Related papers (2023-04-12T14:51:47Z)
- GLUECons: A Generic Benchmark for Learning Under Constraints [102.78051169725455]
In this work, we create a benchmark that is a collection of nine tasks in the domains of natural language processing and computer vision.
We model external knowledge as constraints, specify the sources of the constraints for each task, and implement various models that use these constraints.
arXiv Detail & Related papers (2023-02-16T16:45:36Z)
- The rise of the lottery heroes: why zero-shot pruning is hard [3.1473798197405944]
Recent advances in deep learning optimization showed that just a subset of parameters are really necessary to successfully train a model.
Finding these trainable sub-networks is a typically costly process.
This inhibits practical applications, raising the question: can such sub-graph structures in deep learning models instead be found at training time?
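For reference (not this paper's method), the standard post-training way to extract such a sub-network is a global magnitude mask over the trained weights; a minimal NumPy sketch, with the 90% sparsity level chosen purely for illustration:

```python
import numpy as np

def magnitude_masks(weights, sparsity=0.9):
    """Global magnitude pruning: keep the largest-|w| fraction of weights
    across all layers. `weights` is a list of ndarrays; returns 0/1 masks."""
    all_mags = np.concatenate([np.abs(w).ravel() for w in weights])
    threshold = np.quantile(all_mags, sparsity)  # prune bottom `sparsity` fraction
    return [(np.abs(w) > threshold).astype(w.dtype) for w in weights]

# Usage: mask a toy two-layer model at 90% sparsity.
layers = [np.random.randn(64, 32), np.random.randn(32, 10)]
pruned = [w * m for w, m in zip(layers, magnitude_masks(layers))]
```

The question raised above is whether comparable masks can be identified during training, avoiding the costly train-prune-retrain cycle.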
arXiv Detail & Related papers (2022-02-24T22:49:36Z)
- Constrained Learning with Non-Convex Losses [119.8736858597118]
Though learning has become a core technology of modern information processing, there is now ample evidence that it can lead to biased, unsafe, and prejudiced solutions.
arXiv Detail & Related papers (2021-03-08T23:10:33Z)
- Instance-Dependent Complexity of Contextual Bandits and Reinforcement Learning: A Disagreement-Based Perspective [104.67295710363679]
In the classical multi-armed bandit problem, instance-dependent algorithms attain improved performance on "easy" problems with a gap between the best and second-best arm.
We introduce a family of complexity measures that are both sufficient and necessary to obtain instance-dependent regret bounds.
We then introduce new oracle-efficient algorithms which adapt to the gap whenever possible, while also attaining the minimax rate in the worst case.
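For orientation, the contrast at issue is the classical gap-dependent versus minimax regret rates from the K-armed bandit literature (standard results, not specific to this paper, which extends such guarantees to contextual bandits and RL):

```latex
% Classical K-armed bandit regret rates over horizon T, where \Delta_a is the
% gap between the best arm's mean reward and arm a's mean reward.
R_T = O\!\left(\sum_{a:\,\Delta_a > 0} \frac{\log T}{\Delta_a}\right)
\ \text{(gap-dependent, e.g. UCB)}
\qquad \text{vs.} \qquad
R_T = O\!\left(\sqrt{KT}\right)
\ \text{(minimax)}
```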
arXiv Detail & Related papers (2020-10-07T01:33:06Z)
- Topological Gradient-based Competitive Learning [1.6752712949948443]
This work presents a novel, comprehensive theory that aims to bridge competitive learning with gradient-based learning.
We fully demonstrate the theoretical equivalence of two novel gradient-based competitive layers.
Preliminary experiments show how the dual approach, trained on the transpose of the input matrix, leads to a faster convergence rate and higher training accuracy in both low- and high-dimensional scenarios.
arXiv Detail & Related papers (2020-08-21T13:44:38Z) - Optimization and Generalization of Regularization-Based Continual
Learning: a Loss Approximation Viewpoint [35.5156045701898]
We provide a novel viewpoint of regularization-based continual learning by formulating it as a second-order Taylor approximation of the loss function of each task.
Based on this viewpoint, we study the optimization aspects (i.e., convergence) as well as generalization properties (i.e., finite-sample guarantees) of regularization-based continual learning.
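A minimal sketch of that viewpoint, assuming a diagonal curvature estimate (e.g. a Fisher diagonal, as in EWC-style methods); the names here are hypothetical, not the paper's notation:

```python
import numpy as np

def quadratic_penalty(theta, theta_star, h_diag, lam=1.0):
    """Second-order Taylor surrogate for an earlier task's loss around its
    optimum theta_star:
      L_t(theta) ~ L_t(theta_star)
                   + 0.5 * (theta - theta_star)^T H (theta - theta_star).
    With a diagonal curvature estimate h_diag this reduces to a weighted
    quadratic penalty on parameter drift."""
    d = theta - theta_star
    return 0.5 * lam * np.sum(h_diag * d * d)

# Training on a new task then minimizes, over theta:
#   new_task_loss(theta) + sum of quadratic_penalty(theta, ts, h) over old tasks.
```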
arXiv Detail & Related papers (2020-06-19T06:08:40Z) - Reparameterized Variational Divergence Minimization for Stable Imitation [57.06909373038396]
We study the extent to which variations in the choice of probabilistic divergence may yield more performant imitation learning from observation (ILO) algorithms.
We contribute a reparameterization trick for adversarial imitation learning to alleviate the challenges of the promising $f$-divergence minimization framework.
Empirically, we demonstrate that our design choices allow for ILO algorithms that outperform baseline approaches and more closely match expert performance in low-dimensional continuous-control tasks.
arXiv Detail & Related papers (2020-06-18T19:04:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.