NL4Opt Competition: Formulating Optimization Problems Based on Their
Natural Language Descriptions
- URL: http://arxiv.org/abs/2303.08233v2
- Date: Mon, 27 Mar 2023 01:10:12 GMT
- Title: NL4Opt Competition: Formulating Optimization Problems Based on Their
Natural Language Descriptions
- Authors: Rindranirina Ramamonjison, Timothy T. Yu, Raymond Li, Haley Li,
Giuseppe Carenini, Bissan Ghaddar, Shiqi He, Mahdi Mostajabdaveh, Amin
Banitalebi-Dehkordi, Zirui Zhou, Yong Zhang
- Abstract summary: The goal of the competition is to increase the accessibility and usability of optimization solvers by allowing non-experts to interface with them using natural language.
We present the LP word problem dataset and shared tasks for the NeurIPS 2022 competition.
- Score: 19.01388243205877
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Natural Language for Optimization (NL4Opt) Competition was created to
investigate methods of extracting the meaning and formulation of an
optimization problem based on its text description. Specifically, the goal of
the competition is to increase the accessibility and usability of optimization
solvers by allowing non-experts to interface with them using natural language.
We separate this challenging goal into two sub-tasks: (1) recognize and label
the semantic entities that correspond to the components of the optimization
problem; (2) generate a meaning representation (i.e., a logical form) of the
problem from its detected problem entities. The first task aims to reduce
ambiguity by detecting and tagging the entities of the optimization problems.
The second task creates an intermediate representation of the linear
programming (LP) problem that is converted into a format that can be used by
commercial solvers. In this report, we present the LP word problem dataset and
shared tasks for the NeurIPS 2022 competition. Furthermore, we investigate and
compare the performance of the ChatGPT large language model against the winning
solutions. Through this competition, we hope to spark interest in the
development of novel machine learning applications and datasets for
optimization modeling.
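To make the two sub-tasks concrete, below is a minimal, illustrative Python sketch (not the competition baseline): it assumes sub-task 1 has produced a dictionary of tagged entities for a toy word problem, maps that intermediate representation to canonical LP arrays (sub-task 2), and hands them to scipy.optimize.linprog as a stand-in for the commercial solvers mentioned in the abstract. The entity schema and the toy problem are assumptions for illustration.

```python
# Illustrative composition of the two sub-tasks (NOT the competition
# baseline): tagged entities from sub-task 1 feed an intermediate
# representation (sub-task 2), which is mapped to canonical LP arrays and
# solved. scipy's linprog stands in for a commercial solver; the entity
# schema and toy problem below are assumptions for illustration.
from scipy.optimize import linprog

# Sub-task 1 output (assumed schema): entities tagged in a toy word problem,
# e.g. "Maximize profit: $40/acre of corn, $30/acre of wheat, at most 100
# acres in total, and at most 60 acres of corn."
entities = {
    "variables": ["corn", "wheat"],
    "objective": {"direction": "max", "coeffs": {"corn": 40.0, "wheat": 30.0}},
    "constraints": [
        {"coeffs": {"corn": 1.0, "wheat": 1.0}, "op": "<=", "rhs": 100.0},
        {"coeffs": {"corn": 1.0}, "op": "<=", "rhs": 60.0},
    ],
}

# Sub-task 2: map the intermediate representation to canonical LP arrays.
variables = entities["variables"]
c = [entities["objective"]["coeffs"].get(v, 0.0) for v in variables]
if entities["objective"]["direction"] == "max":
    c = [-x for x in c]  # linprog minimizes, so negate to maximize

A_ub, b_ub = [], []
for con in entities["constraints"]:
    row = [con["coeffs"].get(v, 0.0) for v in variables]
    rhs = con["rhs"]
    if con["op"] == ">=":  # normalize >= rows into <= form
        row, rhs = [-x for x in row], -rhs
    A_ub.append(row)
    b_ub.append(rhs)

# Hand the canonical form to a solver (here scipy, not a commercial one).
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * len(c))
print(dict(zip(variables, res.x)), "objective =", -res.fun)
# -> {'corn': 60.0, 'wheat': 40.0} objective = 3600.0
```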
Related papers
- Autoformulation of Mathematical Optimization Models Using LLMs [50.030647274271516]
We develop an automated approach to creating optimization models from natural language descriptions for commercial solvers.
We identify the three core challenges of autoformulation: (1) defining the vast, problem-dependent hypothesis space, (2) efficiently searching this space under uncertainty, and (3) evaluating formulation correctness.
arXiv Detail & Related papers (2024-11-03T20:41:38Z)
- Solving General Natural-Language-Description Optimization Problems with Large Language Models [34.50671063271608]
We propose a novel framework called OptLLM that augments LLMs with external solvers.
OptLLM accepts user queries in natural language, converts them into mathematical formulations and programming code, and calls the solvers to compute the results; a minimal sketch of such a pipeline follows this entry.
Some features of OptLLM framework have been available for trial since June 2023.
arXiv Detail & Related papers (2024-07-09T07:11:10Z)
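As a hedged illustration of the pipeline described above, the sketch below wires a hypothetical `call_llm` stub (standing in for any chat-completion API; OptLLM's real interface is not given in this summary) to scipy.optimize.linprog. The JSON schema and the canned response are assumptions made so the example runs end to end.

```python
# Illustrative OptLLM-style pipeline (NOT the actual OptLLM code): an LLM
# turns a natural-language query into a machine-readable LP formulation,
# and an external solver computes the answer from that formulation.
import json
from scipy.optimize import linprog

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a chat-completion API call.

    A real system would send `prompt` to an LLM; here we return a canned
    formulation (minimize c @ x s.t. A_ub @ x <= b_ub) so the sketch runs.
    """
    return '{"c": [-40, -30], "A_ub": [[1, 1], [1, 0]], "b_ub": [100, 60]}'

def solve_from_natural_language(query: str):
    # Step 1: the LLM converts the query into a mathematical formulation.
    prompt = ("Convert this problem to JSON with keys 'c' (minimized), "
              "'A_ub', 'b_ub':\n" + query)
    formulation = json.loads(call_llm(prompt))
    # Step 2: an external solver calculates the result from the formulation.
    res = linprog(formulation["c"], A_ub=formulation["A_ub"],
                  b_ub=formulation["b_ub"])  # bounds default to x >= 0
    return res.x, res.fun

x, obj = solve_from_natural_language(
    "Maximize 40*corn + 30*wheat with corn + wheat <= 100 and corn <= 60.")
print(x, obj)  # optimal acres per crop; obj is the negated profit (-3600.0)
```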
- SEGO: Sequential Subgoal Optimization for Mathematical Problem-Solving [64.38649623473626]
Large Language Models (LLMs) have driven substantial progress in artificial intelligence.
We propose a novel framework called SEquential subGoal Optimization (SEGO) to enhance LLMs' ability to solve mathematical problems.
arXiv Detail & Related papers (2023-10-19T17:56:40Z)
- AI-Copilot for Business Optimisation: A Framework and A Case Study in Production Scheduling [3.522755287096529]
We propose an AI-Copilot for business optimisation problem formulation.
To work within token limitations, we introduce modularization and prompt engineering techniques; a minimal sketch of the modularization idea follows this entry.
We design performance evaluation metrics that are better suited for assessing the accuracy and quality of problem formulations.
arXiv Detail & Related papers (2023-09-22T23:45:21Z)
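The summary above names modularization and prompt engineering but not their mechanics; the sketch below is one plausible reading, splitting the formulation task into per-module prompts that each stay under an assumed token budget. The budget, templates, and `ask` stub are illustrative assumptions, not the paper's implementation.

```python
# One plausible reading of "modularization" for token limits (an assumption,
# not the paper's code): instead of one prompt carrying the whole problem
# description, issue smaller per-module prompts that each fit the budget.

MAX_PROMPT_TOKENS = 2048          # assumed per-request token budget
CHARS_PER_TOKEN = 4               # rough heuristic for budget checks

MODULE_TEMPLATES = {
    "sets":        "List the index sets in this scheduling problem:\n{text}",
    "objective":   "State only the objective function of:\n{text}",
    "constraints": "List only the constraints of:\n{text}",
}

def ask(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real chat-completion client."""
    raise NotImplementedError

def formulate_in_modules(problem_text: str) -> dict:
    """Build the formulation piecewise, one bounded prompt per module."""
    parts = {}
    for module, template in MODULE_TEMPLATES.items():
        prompt = template.format(text=problem_text)
        if len(prompt) > MAX_PROMPT_TOKENS * CHARS_PER_TOKEN:
            raise ValueError(f"module '{module}' still exceeds the budget")
        parts[module] = ask(prompt)
    return parts  # downstream code would assemble these into one model
```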
- A Novel Approach for Auto-Formulation of Optimization Problems [66.94228200699997]
In the Natural Language for Optimization (NL4Opt) NeurIPS 2022 competition, competitors focus on improving the accessibility and usability of optimization solvers.
In this paper, we present our team's solution.
Our proposed methods achieved an F1-score of 0.931 on sub-task 1 and an accuracy of 0.867 on sub-task 2, placing fourth and third in the competition, respectively.
arXiv Detail & Related papers (2023-02-09T13:57:06Z)
- Visualizing the Relationship Between Encoded Linguistic Information and Task Performance [53.223789395577796]
We study the dynamic relationship between the encoded linguistic information and task performance from the viewpoint of Pareto Optimality.
We conduct experiments on two popular NLP tasks, i.e., machine translation and language modeling, and investigate the relationship between several kinds of linguistic information and task performance.
Our empirical findings suggest that some syntactic information is helpful for NLP tasks whereas encoding more syntactic information does not necessarily lead to better performance.
arXiv Detail & Related papers (2022-03-29T19:03:10Z)
- Learning MDPs from Features: Predict-Then-Optimize for Sequential Decision Problems by Reinforcement Learning [52.74071439183113]
We study the predict-then-optimize framework in the context of sequential decision problems (formulated as MDPs) solved via reinforcement learning.
Two significant computational challenges arise in applying decision-focused learning to MDPs.
arXiv Detail & Related papers (2021-06-06T23:53:31Z)
- Perceptual reasoning based solution methodology for linguistic optimization problems [13.548237279353408]
Linguistic optimization problems (LOPs) are of two types: single-objective linguistic optimization problems (SOLOPs) and multi-objective linguistic optimization problems (MOLOPs).
The use of linguistic information inevitably calls for the utilization of computing with words (CWW), and therefore, 2-tuple linguistic model-based solution methodologies were proposed for LOPs.
We found that 2-tuple linguistic model-based solution methodologies represent the semantics of the linguistic information using a combination of type-1 fuzzy sets and ordinal term sets.
arXiv Detail & Related papers (2020-04-30T16:35:01Z)
- Model Inversion Networks for Model-Based Optimization [110.24531801773392]
We propose model inversion networks (MINs), which learn an inverse mapping from scores to inputs.
MINs can scale to high-dimensional input spaces and leverage offline logged data for both contextual and non-contextual optimization problems.
We evaluate MINs on tasks from the Bayesian optimization literature, high-dimensional model-based optimization problems over images and protein designs, and contextual bandit optimization from logged data.
arXiv Detail & Related papers (2019-12-31T18:06:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.