Machine Learning Constructives and Local Searches for the Travelling
Salesman Problem
- URL: http://arxiv.org/abs/2108.00938v1
- Date: Mon, 2 Aug 2021 14:34:44 GMT
- Title: Machine Learning Constructives and Local Searches for the Travelling
Salesman Problem
- Authors: Tommaso Vitali, Umberto Junior Mele, Luca Maria Gambardella, Roberto
Montemanni
- Abstract summary: We reduce the computational weight of the original deep learning model.
The possibility of adding a local-search phase is explored to further improve performance.
- Score: 7.656272344163667
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The ML-Constructive heuristic is a recently presented method and the first
hybrid approach capable of scaling up to real-scale travelling salesman problems.
It combines machine learning techniques with classic optimization techniques. In
this paper we reduce the computational weight of the original deep learning
model. In addition, since simpler models shorten the execution time, we explore
the possibility of adding a local-search phase to further improve performance.
Experimental results corroborate the quality of the proposed improvements.
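As a hedged illustration of the constructive-plus-local-search pipeline the abstract describes, the sketch below builds a tour with a plain nearest-neighbour constructive heuristic and then refines it with a 2-opt local-search phase. This is a minimal stand-in, not the ML-Constructive model itself: the learned component that chooses promising edges is replaced here by a greedy rule.

```python
import math
import random

def tour_length(tour, pts):
    """Total length of a closed tour over 2-D points."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def nearest_neighbour(pts, start=0):
    """Constructive phase: greedily visit the closest unvisited city."""
    unvisited = set(range(len(pts))) - {start}
    tour = [start]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: math.dist(pts[last], pts[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def two_opt(tour, pts):
    """Local-search phase: reverse segments while that shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                a, b = tour[i - 1], tour[i]
                c, d = tour[j], tour[(j + 1) % len(tour)]
                # Change in length if edges (a,b),(c,d) become (a,c),(b,d).
                delta = (math.dist(pts[a], pts[c]) + math.dist(pts[b], pts[d])
                         - math.dist(pts[a], pts[b]) - math.dist(pts[c], pts[d]))
                if delta < -1e-12:
                    tour[i:j + 1] = reversed(tour[i:j + 1])
                    improved = True
    return tour

random.seed(0)
points = [(random.random(), random.random()) for _ in range(100)]
tour = nearest_neighbour(points)
print("constructive:", round(tour_length(tour, points), 3))
tour = two_opt(tour, points)
print("after 2-opt: ", round(tour_length(tour, points), 3))
```

On random uniform instances the 2-opt phase typically trims several percent off the constructive tour, which is exactly the kind of gain a post-construction local search is added for.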
Related papers
- Unlearning as multi-task optimization: A normalized gradient difference approach with an adaptive learning rate [105.86576388991713]
We introduce a normalized gradient difference (NGDiff) algorithm that gives finer control over the trade-off between the two objectives.
We provide a theoretical analysis and empirically demonstrate the superior performance of NGDiff among state-of-the-art unlearning methods on the TOFU and MUSE datasets.
arXiv Detail & Related papers (2024-10-29T14:41:44Z)
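The summary names the mechanism but not its details; the toy sketch below shows one plausible reading of a normalized-gradient-difference update for unlearning: descend on a "retain" objective while ascending on a "forget" objective, with both gradients rescaled to unit norm so neither dominates. The quadratic losses, fixed step size, and exact combination rule are illustrative assumptions, not the paper's algorithm (which additionally pairs the update with an adaptive learning rate).

```python
import numpy as np

def retain_grad(theta):   # gradient of the toy retain loss ||theta - t_r||^2 / 2
    return theta - np.array([1.0, 2.0])

def forget_grad(theta):   # gradient of the toy forget loss ||theta - t_f||^2 / 2
    return theta - np.array([-3.0, 0.5])

theta = np.zeros(2)
lr = 0.1                  # fixed step for illustration; the paper uses an
                          # adaptive learning rate instead
for step in range(200):
    g_r = retain_grad(theta)
    g_f = forget_grad(theta)
    # Normalize each gradient, then take their difference: move toward
    # retaining and away from forgetting with balanced magnitudes.
    d = (g_r / (np.linalg.norm(g_r) + 1e-12)
         - g_f / (np.linalg.norm(g_f) + 1e-12))
    theta -= lr * d
print("theta after unlearning-style updates:", theta)
```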
- Large Language Models as Surrogate Models in Evolutionary Algorithms: A Preliminary Study [5.6787965501364335]
Surrogate-assisted selection is a core step in evolutionary algorithms to solve expensive optimization problems.
Traditionally, this has relied on conventional machine learning methods, leveraging historical evaluations to predict the performance of new solutions.
In this work, we propose a novel surrogate model based purely on LLM inference capabilities, eliminating the need for training.
arXiv Detail & Related papers (2024-06-15T15:54:00Z)
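For contrast with the training-free LLM surrogate proposed here, the sketch below shows the conventional setup the summary refers to: a cheap model fitted to historical evaluations pre-screens offspring so only the most promising one receives a true, expensive evaluation. The k-nearest-neighbour surrogate, sphere objective, and (1+1)-style loop are illustrative assumptions; the paper's idea would replace knn_surrogate with LLM inference.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_f(x):
    """Stand-in for an expensive objective (sphere function, minimized)."""
    return float(np.sum(x ** 2))

# Archive of historically evaluated solutions.
X_hist = rng.uniform(-5, 5, size=(40, 3))
y_hist = np.array([expensive_f(x) for x in X_hist])

def knn_surrogate(x, k=5):
    """Conventional surrogate: mean objective of the k nearest archive points."""
    d = np.linalg.norm(X_hist - x, axis=1)
    return float(y_hist[np.argsort(d)[:k]].mean())

best_idx = int(np.argmin(y_hist))
parent, parent_y = X_hist[best_idx].copy(), y_hist[best_idx]
for gen in range(30):
    offspring = parent + rng.normal(0, 0.5, size=(20, 3))   # mutate
    scores = [knn_surrogate(c) for c in offspring]          # cheap pre-screen
    cand = offspring[int(np.argmin(scores))]                # surrogate's pick
    cand_y = expensive_f(cand)                              # one true evaluation
    X_hist = np.vstack([X_hist, cand])
    y_hist = np.append(y_hist, cand_y)
    if cand_y < parent_y:
        parent, parent_y = cand, cand_y
print("best objective found:", parent_y)
```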
- Continual Learning with Weight Interpolation [4.689826327213979]
Continual learning requires models to adapt to new tasks while retaining knowledge from previous ones.
This paper proposes a novel approach to continual learning utilizing the weight consolidation method.
arXiv Detail & Related papers (2024-04-05T10:25:40Z)
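A minimal sketch of the weight-interpolation idea in the title, under the assumption that it blends per-layer parameters of the previous-task model and the newly fine-tuned one; the function name and single mixing coefficient alpha are illustrative, not the paper's exact consolidation rule.

```python
import numpy as np

def interpolate_weights(old_params, new_params, alpha=0.5):
    """Blend per-layer weights of the previous-task model and the newly
    fine-tuned model; alpha trades plasticity (1.0) for stability (0.0).
    A generic weight-interpolation sketch, not the paper's exact rule."""
    return {name: (1 - alpha) * old_params[name] + alpha * new_params[name]
            for name in old_params}

rng = np.random.default_rng(1)
old = {"w": rng.normal(size=(4, 4)), "b": np.zeros(4)}       # after task 1
new = {"w": old["w"] + rng.normal(scale=0.1, size=(4, 4)),   # after task 2
       "b": old["b"] + 0.05}
merged = interpolate_weights(old, new, alpha=0.3)
print(merged["w"].shape, merged["b"])
```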
- PETScML: Second-order solvers for training regression problems in Scientific Machine Learning [0.22499166814992438]
In recent years, we have witnessed the emergence of scientific machine learning as a data-driven analysis tool.
We introduce a software framework built on top of the Portable and Extensible Toolkit for Scientific computation (PETSc) to bridge the gap between deep-learning software and conventional machine-learning techniques.
arXiv Detail & Related papers (2024-03-18T18:59:42Z)
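PETScML's own API is not shown here; as a generic illustration of the second-order solvers it targets, the sketch below applies Newton's method to a small least-squares regression problem, where the gradient and Hessian are available in closed form.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(50, 3))
y = A @ np.array([2.0, -1.0, 0.5]) + 0.01 * rng.normal(size=50)

# Least-squares loss L(w) = ||A w - y||^2 / 2.
# Gradient: A^T (A w - y); Hessian: A^T A (constant for a linear model).
w = np.zeros(3)
for it in range(5):
    grad = A.T @ (A @ w - y)
    hess = A.T @ A
    w -= np.linalg.solve(hess, grad)   # Newton step: H^{-1} g
    print(f"iter {it}: loss = {0.5 * np.sum((A @ w - y) ** 2):.6f}")
```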
- Design Space Exploration of Approximate Computing Techniques with a Reinforcement Learning Approach [49.42371633618761]
We propose an RL-based strategy to find approximate versions of an application that balance accuracy degradation against reductions in power consumption and computation time.
Our experimental results show a good trade-off between accuracy degradation and reduced power and computation time for some benchmarks.
arXiv Detail & Related papers (2023-12-29T09:10:40Z)
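The summary does not specify the RL formulation, so the sketch below uses a deliberately simple stand-in: an epsilon-greedy bandit over a hypothetical table of approximation levels, with a reward that trades power savings against accuracy degradation. The DESIGNS table and the weighting are invented for illustration.

```python
import random

# Hypothetical design space: approximation level -> (accuracy loss, power saving).
DESIGNS = {0: (0.00, 0.00), 1: (0.01, 0.15), 2: (0.03, 0.30),
           3: (0.08, 0.45), 4: (0.20, 0.60)}

def reward(level, w_acc=2.0):
    """Trade-off reward: power saved minus weighted accuracy degradation,
    with noise standing in for measurement variability."""
    acc_loss, power_save = DESIGNS[level]
    return power_save - w_acc * acc_loss + random.gauss(0, 0.01)

q = {lvl: 0.0 for lvl in DESIGNS}     # value estimate per design point
n = {lvl: 0 for lvl in DESIGNS}
random.seed(3)
for t in range(500):
    lvl = (random.choice(list(DESIGNS)) if random.random() < 0.1
           else max(q, key=q.get))    # epsilon-greedy exploration
    r = reward(lvl)
    n[lvl] += 1
    q[lvl] += (r - q[lvl]) / n[lvl]   # incremental mean update
print("preferred design:", max(q, key=q.get), q)
```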
- Adaptive Optimization Algorithms for Machine Learning [0.0]
Machine learning assumes a pivotal role in our data-driven world.
This thesis contributes novel insights, introduces new algorithms with improved convergence guarantees, and improves analyses of popular practical algorithms.
arXiv Detail & Related papers (2023-11-16T21:22:47Z)
- Learning to Optimize Permutation Flow Shop Scheduling via Graph-based Imitation Learning [70.65666982566655]
Permutation flow shop scheduling (PFSS) is widely used in manufacturing systems.
We propose to train the model via expert-driven imitation learning, which yields more stable and accurate convergence.
Our model's network parameters are reduced to only 37% of the baseline's, and its average solution gap to the expert solutions decreases from 6.8% to 1.3%.
arXiv Detail & Related papers (2022-10-31T09:46:26Z)
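As a hedged illustration of expert-driven imitation learning (not the paper's graph-based architecture), the sketch below behaviour-clones a dispatching rule: a linear job scorer is nudged, perceptron-style, toward reproducing an expert that always dispatches the shortest job.

```python
import numpy as np

rng = np.random.default_rng(4)

def expert_action(proc_times):
    """Expert rule (stand-in for exact solutions): shortest processing time."""
    return int(np.argmin(proc_times))

# Behaviour cloning: learn a linear job-scoring function that imitates the
# expert's dispatch decisions on randomly drawn job sets.
w = np.zeros(2)
for episode in range(2000):
    jobs = rng.uniform(1, 10, size=5)
    feats = np.stack([jobs, jobs ** 2], axis=1)      # simple per-job features
    a_star = expert_action(jobs)
    a_pred = int(np.argmin(feats @ w))               # imitator dispatches argmin
    if a_pred != a_star:                             # ranking-perceptron update:
        w += 0.01 * (feats[a_pred] - feats[a_star])  # push expert's score lower

jobs = rng.uniform(1, 10, size=5)
feats = np.stack([jobs, jobs ** 2], axis=1)
print("expert picks job", expert_action(jobs),
      "- imitator picks job", int(np.argmin(feats @ w)))
```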
- When to Update Your Model: Constrained Model-based Reinforcement Learning [50.74369835934703]
We propose a novel and general theoretical scheme for a non-decreasing performance guarantee of model-based RL (MBRL).
Our follow-up derived bounds reveal the relationship between model shifts and performance improvement.
A further example demonstrates that learning models from a dynamically-varying number of explorations benefits the eventual returns.
arXiv Detail & Related papers (2022-10-15T17:57:43Z)
- Large Scale Mask Optimization Via Convolutional Fourier Neural Operator and Litho-Guided Self Training [54.16367467777526]
We present a Convolutional Fourier Neural Operator that can efficiently learn mask optimization tasks.
For the first time, our machine learning-based framework outperforms state-of-the-art numerical mask optimizers.
arXiv Detail & Related papers (2022-07-08T16:39:31Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- PrIU: A Provenance-Based Approach for Incrementally Updating Regression Models [9.496524884855559]
This paper presents an efficient provenance-based approach, PrIU, for incrementally updating model parameters without sacrificing prediction accuracy.
We prove the correctness and convergence of the incrementally updated model parameters, and validate it experimentally.
Experimental results show that PrIU-opt achieves speed-ups of up to two orders of magnitude compared to simply retraining the model from scratch, while producing highly similar models.
arXiv Detail & Related papers (2020-02-26T21:04:06Z)
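As a rough analogue of incrementally updating regression parameters instead of retraining, the sketch below removes one training example from an ordinary-least-squares fit with a Sherman-Morrison downdate; this generic rank-one update stands in for, and is not identical to, PrIU's provenance-based machinery.

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=200)

# Fit ordinary least squares once, keeping (X^T X)^{-1} around.
XtX_inv = np.linalg.inv(X.T @ X)
Xty = X.T @ y
w = XtX_inv @ Xty

def delete_point(XtX_inv, Xty, x, t):
    """Remove one training example (x, t) via a Sherman-Morrison downdate,
    updating the model without retraining from scratch (a generic
    incremental-update sketch in the spirit of, not identical to, PrIU)."""
    v = XtX_inv @ x
    XtX_inv = XtX_inv + np.outer(v, v) / (1.0 - x @ v)
    Xty = Xty - t * x
    return XtX_inv, Xty

XtX_inv, Xty = delete_point(XtX_inv, Xty, X[0], y[0])
w_inc = XtX_inv @ Xty
w_full = np.linalg.lstsq(X[1:], y[1:], rcond=None)[0]   # retrain to compare
print("max diff vs retraining:", np.max(np.abs(w_inc - w_full)))
```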