Unsupervised Learning for Combinatorial Optimization Needs Meta-Learning
- URL: http://arxiv.org/abs/2301.03116v1
- Date: Sun, 8 Jan 2023 22:14:59 GMT
- Title: Unsupervised Learning for Combinatorial Optimization Needs Meta-Learning
- Authors: Haoyu Wang, Pan Li
- Abstract summary: A general framework of unsupervised learning for combinatorial optimization (CO) is to train a neural network (NN) whose output gives a problem solution by directly optimizing the CO objective.
We propose a new objective of unsupervised learning for CO where the goal of learning is to search for good initialization for future problem instances rather than give direct solutions.
We observe that even just the initial solution given by our model before fine-tuning can significantly outperform the baselines under various evaluation settings.
- Score: 14.86600327306136
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A general framework of unsupervised learning for combinatorial optimization
(CO) is to train a neural network (NN) whose output gives a problem solution by
directly optimizing the CO objective. Albeit with some advantages over
traditional solvers, the current framework optimizes an averaged performance
over the distribution of historical problem instances, which misaligns with the
actual goal of CO that looks for a good solution to every future encountered
instance. With this observation, we propose a new objective of unsupervised
learning for CO where the goal of learning is to search for good initialization
for future problem instances rather than give direct solutions. We propose a
meta-learning-based training pipeline for this new objective. Our method
achieves good empirical performance. We observe that even just the initial
solution given by our model before fine-tuning can significantly outperform the
baselines under various evaluation settings including evaluation across
multiple datasets, and the case with large shifts in problem scale. We
conjecture that meta-learning-based training keeps the model only loosely
tied to the local optima of individual training instances while remaining
adaptive to changes in the optimization landscape across instances.
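The summary above does not spell out the training pipeline, so here is a minimal sketch of the idea in PyTorch: learn an initialization that fine-tunes quickly on each new instance rather than a network that emits final solutions. Everything below is an illustrative assumption rather than the authors' exact method: the toy max-cut relaxation, the direct parameterization of node probabilities (the paper trains a neural network on instance features), the first-order Reptile-style outer update, and all hyper-parameters.

```python
import torch

def random_graph(n, p_edge=0.3):
    """Toy Erdos-Renyi instance; stands in for a historical problem distribution."""
    a = (torch.rand(n, n) < p_edge).float().triu(1)
    return a + a.T

def relaxed_cut_value(p, adj):
    """Differentiable max-cut relaxation: p[i] is the probability that node i
    lies on side 1; with a symmetric adjacency this sums the expected cut."""
    return (adj * p[:, None] * (1 - p[None, :])).sum()

def fine_tune(theta, adj, steps=20, lr=0.2):
    """Inner loop: adapt the shared initialization to one problem instance."""
    x = theta.clone().requires_grad_(True)
    opt = torch.optim.SGD([x], lr=lr)
    for _ in range(steps):
        loss = -relaxed_cut_value(torch.sigmoid(x), adj)  # maximize the cut
        opt.zero_grad(); loss.backward(); opt.step()
    return x.detach()

n, meta_lr = 16, 0.1
theta = torch.zeros(n)  # the meta-learned initialization
for _ in range(300):    # outer loop over training instances
    adapted = fine_tune(theta, random_graph(n), steps=5)
    # First-order (Reptile-style) meta-update: pull the initialization
    # toward the instance-adapted parameters.
    theta += meta_lr * (adapted - theta)

# Meta-test: fine-tune on an unseen instance, then round to a discrete cut.
cut = (torch.sigmoid(fine_tune(theta, random_graph(n))) > 0.5).long()
```

The property highlighted in the abstract shows up directly: `theta` is never trained to solve any single instance, only to sit close enough to each instance's optima that a few `fine_tune` steps, or even plain rounding of the initialization, already produce a usable solution.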
Related papers
- Decision-focused Graph Neural Networks for Combinatorial Optimization [62.34623670845006]
An emerging strategy to tackle optimization problems involves the adoption of graph neural networks (GNNs) as an alternative to traditional algorithms.
Despite the growing popularity of GNNs and traditional algorithm solvers in the realm of CO, there is limited research on their integrated use and the correlation between them within an end-to-end framework.
We introduce a decision-focused framework that utilizes GNNs to address CO problems with auxiliary support.
arXiv Detail & Related papers (2024-06-05T22:52:27Z)
- From Function to Distribution Modeling: A PAC-Generative Approach to Offline Optimization [30.689032197123755]
This paper considers the problem of offline optimization, where the objective function is unknown except for a collection of "offline" data examples.
Instead of learning and then optimizing the unknown objective function, we take on a less intuitive but more direct view that optimization can be thought of as a process of sampling from a generative model.
arXiv Detail & Related papers (2024-01-04T01:32:50Z)
- End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes [52.818579746354665]
This paper proposes the first end-to-end differentiable meta-BO framework that generalises neural processes to learn acquisition functions via transformer architectures.
We enable this end-to-end framework with reinforcement learning (RL) to tackle the lack of labelled acquisition data.
arXiv Detail & Related papers (2023-05-25T10:58:46Z)
- Learning Large-scale Neural Fields via Context Pruned Meta-Learning [60.93679437452872]
We introduce an efficient optimization-based meta-learning technique for large-scale neural field training.
We show how gradient re-scaling at meta-test time allows the learning of extremely high-quality neural fields.
Our framework is model-agnostic, intuitive, straightforward to implement, and shows significant reconstruction improvements for a wide range of signals.
arXiv Detail & Related papers (2023-02-01T17:32:16Z)
- Unsupervised Learning for Combinatorial Optimization with Principled Objective Relaxation [19.582494782591386]
This work proposes an unsupervised learning framework for combinatorial optimization (CO) problems.
Our key contribution is the observation that if the relaxed objective satisfies entry-wise concavity, a low optimization loss guarantees the quality of the final integral solutions.
In particular, this observation can guide the design of objective models in applications where the objectives are not given explicitly and must be modeled beforehand.
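To make the cited guarantee concrete: entry-wise concavity means the relaxed objective is concave in each coordinate separately, so fixing a coordinate to whichever endpoint of {0, 1} scores lower can never increase the objective, and rounding coordinates one at a time preserves solution quality. A minimal sketch, with a hypothetical vertex-cover-style toy objective (linear in each entry, hence entry-wise concave):

```python
import torch

def sequential_rounding(x, relaxed_obj):
    """Round each coordinate of a soft solution x in [0,1]^n to {0,1}.
    If relaxed_obj is entry-wise concave, min(f(x_i=0), f(x_i=1)) <= f(x),
    so no rounding step can increase the objective."""
    x = x.clone()
    for i in range(x.numel()):
        lo, hi = x.clone(), x.clone()
        lo[i], hi[i] = 0.0, 1.0
        x = lo if relaxed_obj(lo) <= relaxed_obj(hi) else hi
    return x

# Toy weighted vertex cover over edges (0,1) and (2,3): node weights plus a
# penalty for uncovered edges; linear per coordinate, so entry-wise concave.
w = torch.tensor([3.0, 1.0, 2.0, 5.0])
uncovered = lambda x: (1 - x[0]) * (1 - x[1]) + (1 - x[2]) * (1 - x[3])
obj = lambda x: (w * x).sum() + 10.0 * uncovered(x)
x_int = sequential_rounding(torch.tensor([0.4, 0.7, 0.6, 0.2]), obj)
```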
arXiv Detail & Related papers (2022-07-13T06:44:17Z)
- On the Generalization of Neural Combinatorial Optimization Heuristics [0.7049738935364298]
We show that our proposed meta-learning approach significantly improves the generalization of two state-of-the-art models.
We formalize solving a CO problem over a given instance distribution as a separate learning task.
We investigate meta-learning techniques to learn a model on a variety of tasks, in order to optimize its capacity to adapt to new tasks.
arXiv Detail & Related papers (2022-06-01T22:39:35Z)
- Conservative Objective Models for Effective Offline Model-Based Optimization [78.19085445065845]
Computational design problems arise in a number of settings, from synthetic biology to computer architectures.
We propose a method that learns a model of the objective function that lower bounds the actual value of the ground-truth objective on out-of-distribution inputs.
COMs are simple to implement and outperform a number of existing methods on a wide range of MBO problems.
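A hedged sketch of the conservatism idea as summarized: fit the offline data, but explicitly push the model's predictions down at inputs found by gradient ascent on the model itself, so that out-of-distribution candidates are not overestimated. The architecture, penalty weight `alpha`, ascent schedule, and synthetic data below are illustrative assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
alpha = 0.5  # weight of the conservatism term (illustrative)

def adversarial_inputs(x, steps=5, lr=0.1):
    """Ascend the learned model from the data points to find inputs
    where it may be overestimating the true objective."""
    x = x.clone().requires_grad_(True)
    for _ in range(steps):
        (grad,) = torch.autograd.grad(model(x).sum(), x)
        x = (x + lr * grad).detach().requires_grad_(True)
    return x.detach()

x_data = torch.randn(128, 8)               # stand-in offline designs
y_data = x_data.norm(dim=1, keepdim=True)  # stand-in objective values
for _ in range(100):
    x_adv = adversarial_inputs(x_data)
    pred = model(x_data)
    # Regression term plus conservatism term: lower the model at the
    # ascent points, raise it at the data points.
    loss = ((pred - y_data) ** 2).mean() + alpha * (model(x_adv).mean() - pred.mean())
    opt.zero_grad(); loss.backward(); opt.step()
```

Candidate designs can then be found by ascending the trained model, which the conservatism term discourages from wandering into regions it overvalues.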
arXiv Detail & Related papers (2021-07-14T17:55:28Z)
- Contrastive Losses and Solution Caching for Predict-and-Optimize [19.31153168397003]
We use a Noise Contrastive approach to motivate a family of surrogate loss functions.
We address a major bottleneck of all predict-and-optimize approaches.
We show that even a very slow growth rate is enough to match the quality of state-of-the-art methods.
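The bottleneck alluded to above is the combinatorial solver call inside every training step. One reading of the caching idea, sketched below: keep a pool of previously computed solutions, answer most queries with a cheap scan of that pool under the current predicted costs, and call the real solver only with a small probability that plays the role of the growth rate. `solve_exact` is a toy stand-in for an actual solver.

```python
import random
from itertools import product

cache = []

def solve_exact(costs):
    """Toy stand-in for an exact solver: enumerate all 0/1 vectors."""
    return min(product((0, 1), repeat=len(costs)),
               key=lambda s: sum(c * v for c, v in zip(costs, s)))

def cached_solve(costs, p_grow=0.05):
    """Most calls scan the cache; the expensive solver runs only with
    probability p_grow, and its solution enlarges the cache."""
    if not cache or random.random() < p_grow:
        cache.append(solve_exact(costs))  # rare, expensive call
    return min(cache, key=lambda s: sum(c * v for c, v in zip(costs, s)))

# e.g. inside a predict-and-optimize training loop:
sol = cached_solve([0.3, -1.2, 0.8])
```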
arXiv Detail & Related papers (2020-11-10T19:09:12Z)
- Fast Few-Shot Classification by Few-Iteration Meta-Learning [173.32497326674775]
We introduce a fast optimization-based meta-learning method for few-shot classification.
Our strategy enables important aspects of the base learner objective to be learned during meta-training.
We perform a comprehensive experimental analysis, demonstrating the speed and effectiveness of our approach.
arXiv Detail & Related papers (2020-10-01T15:59:31Z)
- Submodular Meta-Learning [43.15332631500541]
We introduce a discrete variant of the meta-learning framework to improve performance on future tasks.
Our approach aims at using prior data, i.e., previously visited tasks, to train a proper initial solution set.
We show that our framework leads to a significant reduction in computational complexity in solving the new tasks while incurring a small performance loss.
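One way to read this concretely: meta-training selects an initial set of l elements that scores well on average across prior tasks, so each new task pays for only the remaining k - l greedy steps. A toy sketch under that reading; the task objectives, ground set, and sizes are hypothetical.

```python
def greedy(f, ground, base, budget):
    """Standard greedy: repeatedly add the element with best marginal gain."""
    S = list(base)
    for _ in range(budget):
        best = max((e for e in ground if e not in S),
                   key=lambda e: f(S + [e]) - f(S))
        S.append(best)
    return S

ground = list(range(10))
k, l = 4, 2
tasks = [lambda S, w=w: sum(w[e] for e in set(S))
         for w in ([i % 3 for i in range(10)], [(10 - i) % 4 for i in range(10)])]

# Meta-training: build an initial set of size l that maximizes the
# average objective over previously visited tasks.
avg_f = lambda S: sum(f(S) for f in tasks) / len(tasks)
init = greedy(avg_f, ground, [], l)

# Meta-test: each new task only pays for k - l greedy steps.
new_task = lambda S: sum((e * e) % 7 for e in set(S))
solution = greedy(new_task, ground, init, k - l)
```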
arXiv Detail & Related papers (2020-07-11T21:02:48Z)
- Self-Directed Online Machine Learning for Topology Optimization [58.920693413667216]
Self-directed Online Learning Optimization integrates a deep neural network (DNN) with finite element method (FEM) calculations.
Our algorithm was tested on four types of problems including compliance minimization, fluid-structure optimization, heat transfer enhancement, and truss optimization.
It reduced the computational time by 2 to 5 orders of magnitude compared with directly using heuristic methods, and outperformed all state-of-the-art algorithms tested in our experiments.
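A speculative sketch of such a loop, assuming the DNN serves as a surrogate for the expensive FEM objective: propose a design by descending the surrogate, evaluate it with the FEM, and retrain on the enlarged dataset. `fem_evaluate` is a toy stand-in for a real FEM call, and all dimensions and schedules are illustrative.

```python
import torch
import torch.nn as nn

def fem_evaluate(design):
    """Toy stand-in for an expensive FEM objective (e.g., compliance)."""
    return (design ** 2).sum(dim=-1, keepdim=True)

surrogate = nn.Sequential(nn.Linear(5, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-2)
X = torch.randn(8, 5)  # initial designs
Y = fem_evaluate(X)    # their (stubbed) FEM evaluations

for _ in range(20):
    # Fit the surrogate to all FEM evaluations gathered so far.
    for _ in range(50):
        loss = ((surrogate(X) - Y) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    # Self-direction: descend the surrogate from the best design so far
    # to propose the next design to send to the FEM.
    x = X[Y.argmin()].clone().requires_grad_(True)
    for _ in range(25):
        (g,) = torch.autograd.grad(surrogate(x).sum(), x)
        x = (x - 0.1 * g).detach().requires_grad_(True)
    x = x.detach().unsqueeze(0)
    X = torch.cat([X, x]); Y = torch.cat([Y, fem_evaluate(x)])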
arXiv Detail & Related papers (2020-02-04T20:00:28Z)