A Novel Approach for Auto-Formulation of Optimization Problems
- URL: http://arxiv.org/abs/2302.04643v1
- Date: Thu, 9 Feb 2023 13:57:06 GMT
- Title: A Novel Approach for Auto-Formulation of Optimization Problems
- Authors: Yuting Ning, Jiayu Liu, Longhu Qin, Tong Xiao, Shangzi Xue, Zhenya
Huang, Qi Liu, Enhong Chen, Jinze Wu
- Abstract summary: In the Natural Language for Optimization (NL4Opt) NeurIPS 2022 competition, competitors focus on improving the accessibility and usability of optimization solvers.
In this paper, we present the solution of our team.
Our proposed methods achieve an F1-score of 0.931 in subtask 1 and an accuracy of 0.867 in subtask 2, winning fourth and third place respectively in this competition.
- Score: 66.94228200699997
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the Natural Language for Optimization (NL4Opt) NeurIPS 2022 competition,
competitors focus on improving the accessibility and usability of optimization
solvers through two subtasks: subtask 1, recognizing the semantic entities that
correspond to the components of the optimization problem, and subtask 2,
generating formulations for the optimization problem. In this paper, we present
our team's solution. First, we treat subtask 1 as a named entity recognition
(NER) problem, with a solution pipeline comprising pre-processing methods,
adversarial training, post-processing methods, and ensemble learning. Second,
we treat subtask 2 as a generation problem, with a solution pipeline comprising
specially designed prompts, adversarial training, post-processing methods, and
ensemble learning. Our proposed methods achieve an F1-score of 0.931 in
subtask 1 and an accuracy of 0.867 in subtask 2, winning fourth and third
place respectively in this competition. Our code is available at
https://github.com/bigdata-ustc/nl4opt.
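Both pipelines in the abstract include an adversarial-training step. As a rough sketch of what such a step can look like for the subtask-1 NER model, the snippet below applies an FGM-style gradient perturbation to the embedding weights during training; the choice of FGM, the "embedding" parameter name, the epsilon value, and the model/batch/optimizer objects in the usage comment are illustrative assumptions, not details taken from the paper (the authors' actual implementation is in the linked repository).

```python
# A minimal sketch of embedding-level adversarial training (FGM-style), assuming
# a PyTorch token-classification model whose embedding parameter name contains
# "embedding". Illustrative only; see the authors' repository for their pipeline.
import torch


class FGM:
    """Adds an L2-normalized gradient perturbation to the embedding weights."""

    def __init__(self, model: torch.nn.Module, emb_name: str = "embedding", epsilon: float = 1.0):
        self.model = model
        self.emb_name = emb_name    # substring identifying the embedding parameter (assumption)
        self.epsilon = epsilon
        self.backup = {}

    def attack(self):
        # Perturb embedding weights in the direction of their gradient.
        for name, param in self.model.named_parameters():
            if param.requires_grad and self.emb_name in name and param.grad is not None:
                self.backup[name] = param.data.clone()
                norm = torch.norm(param.grad)
                if norm != 0:
                    param.data.add_(self.epsilon * param.grad / norm)

    def restore(self):
        # Undo the perturbation before the optimizer step.
        for name, param in self.model.named_parameters():
            if name in self.backup:
                param.data = self.backup[name]
        self.backup = {}


# Hypothetical usage inside a training loop:
#   fgm = FGM(model)
#   loss = model(**batch).loss
#   loss.backward()                  # gradients on clean inputs
#   fgm.attack()                     # add adversarial perturbation to embeddings
#   model(**batch).loss.backward()   # accumulate gradients on perturbed inputs
#   fgm.restore()                    # restore original embedding weights
#   optimizer.step(); optimizer.zero_grad()
```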
Related papers
- Learning Multiple Initial Solutions to Optimization Problems [52.9380464408756]
Sequentially solving similar optimization problems under strict runtime constraints is essential for many applications.
We propose learning to predict multiple diverse initial solutions given parameters that define the problem instance.
We find significant and consistent improvement with our method across all evaluation settings and demonstrate that it efficiently scales with the number of initial solutions required.
arXiv Detail & Related papers (2024-11-04T15:17:19Z) - Training Greedy Policy for Proposal Batch Selection in Expensive Multi-Objective Combinatorial Optimization [52.80408805368928]
We introduce a novel greedy-style subset selection algorithm for batch acquisition.
Our experiments on the red fluorescent proteins show that our proposed method reaches the baseline performance with 1.69x fewer queries.
arXiv Detail & Related papers (2024-06-21T05:57:08Z) - Learning Constrained Optimization with Deep Augmented Lagrangian Methods [54.22290715244502]
A machine learning (ML) model is trained to emulate a constrained optimization solver.
This paper proposes an alternative approach, in which the ML model is trained to predict dual solution estimates directly.
This enables an end-to-end training scheme in which the dual objective serves as the loss function and solution estimates are driven toward primal feasibility, emulating a Dual Ascent method.
arXiv Detail & Related papers (2024-03-06T04:43:22Z) - Accelerating Cutting-Plane Algorithms via Reinforcement Learning
Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z) - NL4Opt Competition: Formulating Optimization Problems Based on Their
Natural Language Descriptions [19.01388243205877]
The goal of the competition is to increase the accessibility and usability of optimization solvers by allowing non-experts to interface with them using natural language.
We present the LP word problem dataset and shared tasks for the NeurIPS 2022 competition.
arXiv Detail & Related papers (2023-03-14T20:59:04Z) - Tensor Train for Global Optimization Problems in Robotics [6.702251803443858]
The convergence of many numerical optimization techniques is highly dependent on the initial guess given to the solver.
We propose a novel approach that utilizes Tensor Train methods to initialize existing optimization solvers near global optima.
We show that the proposed method can generate samples close to global optima and from multiple modes.
arXiv Detail & Related papers (2022-06-10T13:18:26Z) - Learning to Optimize: A Primer and A Benchmark [94.29436694770953]
Learning to optimize (L2O) is an emerging approach that leverages machine learning to develop optimization methods.
This article is poised to be the first comprehensive survey and benchmark of L2O for continuous optimization.
arXiv Detail & Related papers (2021-03-23T20:46:20Z) - Learning to Optimize Under Constraints with Unsupervised Deep Neural
Networks [0.0]
In this paper, we propose an unsupervised deep learning (DL) method to learn how to solve generic constrained continuous optimization problems in real time.
arXiv Detail & Related papers (2021-01-04T02:58:37Z) - Contrastive Losses and Solution Caching for Predict-and-Optimize [19.31153168397003]
We use a Noise Contrastive approach to motivate a family of surrogate loss functions.
By caching solutions, we address a major bottleneck of all predict-and-optimize approaches: the need to repeatedly solve the optimization problem during training.
We show that even a very slow growth rate of the solution cache is enough to match the quality of state-of-the-art methods.
arXiv Detail & Related papers (2020-11-10T19:09:12Z) - Combining Reinforcement Learning and Constraint Programming for
Combinatorial Optimization [5.669790037378094]
The goal is to find an optimal solution among a finite set of possibilities.
Deep reinforcement learning (DRL) has shown its promise for solving NP-hard optimization problems.
Constraint programming (CP) is a generic tool to solve optimization problems.
In this work, we propose a general hybrid approach, based on DRL and CP, for solving combinatorial optimization problems.
arXiv Detail & Related papers (2020-06-02T13:54:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.