OptiMUS: Optimization Modeling Using MIP Solvers and large language
models
- URL: http://arxiv.org/abs/2310.06116v2
- Date: Mon, 30 Oct 2023 18:23:45 GMT
- Title: OptiMUS: Optimization Modeling Using MIP Solvers and large language
models
- Authors: Ali AhmadiTeshnizi, Wenzhi Gao, Madeleine Udell
- Abstract summary: We introduce OptiMUS, a Large Language Model (LLM)-based agent designed to formulate and solve MILP problems from their natural language descriptions.
To benchmark our agent, we present NLP4LP, a novel dataset of linear programming (LP) and mixed integer linear programming (MILP) problems.
Our experiments demonstrate that OptiMUS solves nearly twice as many problems as a basic LLM prompting strategy.
- Score: 21.519880445683107
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Optimization problems are pervasive across various sectors, from
manufacturing and distribution to healthcare. However, most such problems are
still solved heuristically by hand rather than optimally by state-of-the-art
solvers, as the expertise required to formulate and solve these problems limits
the widespread adoption of optimization tools and techniques. We introduce
OptiMUS, a Large Language Model (LLM)-based agent designed to formulate and
solve MILP problems from their natural language descriptions. OptiMUS is
capable of developing mathematical models, writing and debugging solver code,
developing tests, and checking the validity of generated solutions. To
benchmark our agent, we present NLP4LP, a novel dataset of linear programming
(LP) and mixed integer linear programming (MILP) problems. Our experiments
demonstrate that OptiMUS solves nearly twice as many problems as a basic LLM
prompting strategy. OptiMUS code and NLP4LP dataset are available at
https://github.com/teshnizi/OptiMUS.
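To make the pipeline concrete, here is a hedged sketch of the kind of artifact such an agent produces: a natural-language problem turned into an MILP and solved programmatically. The toy problem data and the use of PuLP (with its bundled CBC solver) are illustrative assumptions, not taken from the paper; OptiMUS additionally generates tests and validates solutions against the original description, which this sketch omits.

```python
# Illustrative only: a toy MILP of the kind OptiMUS formulates from text.
# Problem (invented): choose integer production quantities of two products
# to maximize profit subject to a machine-hours budget.
from pulp import LpProblem, LpMaximize, LpVariable, LpStatus, value

prob = LpProblem("toy_production_plan", LpMaximize)

# Decision variables: integer units of each product.
x = LpVariable("product_a", lowBound=0, cat="Integer")
y = LpVariable("product_b", lowBound=0, cat="Integer")

# Objective: profit of 3 per unit of A and 5 per unit of B.
prob += 3 * x + 5 * y, "total_profit"

# Constraint: 2h per unit of A, 4h per unit of B, 100 machine-hours available.
prob += 2 * x + 4 * y <= 100, "machine_hours"

prob.solve()
print(LpStatus[prob.status], value(x), value(y), value(prob.objective))
```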
Related papers
- Learning Multiple Initial Solutions to Optimization Problems [52.9380464408756]
Sequentially solving similar optimization problems under strict runtime constraints is essential for many applications.
We propose learning to predict multiple diverse initial solutions given parameters that define the problem instance.
We find significant and consistent improvement with our method across all evaluation settings and demonstrate that it efficiently scales with the number of initial solutions required.
arXiv Detail & Related papers (2024-11-04T15:17:19Z)
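A hedged sketch of the multiple-initial-solutions idea from the entry above: run a local optimizer from several candidate starting points and keep the best result. The objective, the "predicted" starts, and the use of scipy are illustrative assumptions; in the paper the starts are produced by a learned model.

```python
# Illustrative sketch: local optimization from several candidate initial
# solutions, keeping the best. In the paper the starts are predicted by a
# learned model; here they are hard-coded stand-ins.
import numpy as np
from scipy.optimize import minimize

def objective(x):
    # A multimodal toy objective where the starting point matters.
    return np.sin(3 * x[0]) + (x[0] - 0.5) ** 2

predicted_starts = [np.array([-2.0]), np.array([0.0]), np.array([2.0])]

results = [minimize(objective, x0) for x0 in predicted_starts]
best = min(results, key=lambda r: r.fun)
print(f"best x = {best.x}, f = {best.fun:.4f}")
```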
- LLMOPT: Learning to Define and Solve General Optimization Problems from Scratch [16.174567164068037]
We propose a unified learning-based framework called LLMOPT to boost optimization generalization.
LLMOPT introduces a five-element formulation as a universal model for defining diverse optimization problem types.
We evaluate the optimization generalization ability of LLMOPT and competing methods across six real-world datasets.
arXiv Detail & Related papers (2024-10-17T04:37:37Z)
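A hedged sketch of what a "five-element" formulation might look like as a data structure. The element names (sets, parameters, variables, objective, constraints) are a standard optimization-modeling convention and an assumption here, not confirmed by the summary above.

```python
# Illustrative sketch: a five-element optimization formulation as a plain
# data structure. The element names are an assumption based on standard
# modeling conventions, not taken verbatim from the LLMOPT paper.
from dataclasses import dataclass, field

@dataclass
class FiveElementFormulation:
    sets: dict = field(default_factory=dict)         # index sets, e.g. products
    parameters: dict = field(default_factory=dict)   # known problem data
    variables: dict = field(default_factory=dict)    # decision variables
    objective: str = ""                              # what to optimize
    constraints: list = field(default_factory=list)  # restrictions on variables

toy = FiveElementFormulation(
    sets={"P": ["a", "b"]},
    parameters={"profit": {"a": 3, "b": 5}, "hours": {"a": 2, "b": 4}, "budget": 100},
    variables={"x[p]": "integer units of product p"},
    objective="maximize sum_p profit[p] * x[p]",
    constraints=["sum_p hours[p] * x[p] <= budget", "x[p] >= 0"],
)
```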
- Learning Joint Models of Prediction and Optimization [56.04498536842065]
The predict-then-optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by joint predictive models.
arXiv Detail & Related papers (2024-09-07T19:52:14Z)
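A hedged sketch contrasting the two stages of the predict-then-optimize baseline: a model predicts the unknown cost vector, and an LP is then solved with the prediction. The data, the linear model, and scipy's linprog are illustrative assumptions; the paper's joint approach would instead learn solutions directly from features.

```python
# Illustrative sketch of the predict-then-optimize baseline described above:
# stage 1 predicts unknown LP costs from features; stage 2 solves the LP.
# All data and model choices here are invented for illustration.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Stage 1: fit a linear map from features to cost vectors (least squares).
features = rng.normal(size=(50, 3))          # 50 instances, 3 features each
true_W = rng.normal(size=(3, 2))
costs = features @ true_W + 0.1 * rng.normal(size=(50, 2))
W_hat, *_ = np.linalg.lstsq(features, costs, rcond=None)

# Stage 2: for a new instance, predict costs and solve
# min c^T x  subject to  x1 + x2 >= 1, x >= 0.
c_hat = rng.normal(size=3) @ W_hat
res = linprog(c_hat, A_ub=[[-1.0, -1.0]], b_ub=[-1.0], bounds=[(0, None)] * 2)
print("predicted costs:", c_hat, "solution:", res.x)
```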
- OptiMUS-0.3: Using Large Language Models to Model and Solve Optimization Problems at Scale [16.33736498565436]
We introduce a Large Language Model (LLM)-based system designed to formulate and solve linear programming problems from their natural language descriptions.
Our system is capable of developing mathematical models, writing and debugging solver code, evaluating the generated solutions, and improving the efficiency and correctness of its model and code.
Experiments demonstrate that OptiMUS-0.3 outperforms existing state-of-the-art methods on easy datasets by more than 12% and on hard datasets by more than 8%.
arXiv Detail & Related papers (2024-07-29T01:31:45Z)
- OptiBench Meets ReSocratic: Measure and Improve LLMs for Optimization Modeling [62.19438812624467]
Large language models (LLMs) have exhibited their problem-solving abilities in mathematical reasoning.
We propose OptiBench, a benchmark for end-to-end optimization problem solving with human-readable inputs and outputs.
arXiv Detail & Related papers (2024-07-13T13:27:57Z)
- Solving General Natural-Language-Description Optimization Problems with Large Language Models [34.50671063271608]
We propose a novel framework called OptLLM that augments LLMs with external solvers.
OptLLM accepts user queries in natural language, converts them into mathematical formulations and programming code, and calls solvers to calculate the results.
Some features of OptLLM framework have been available for trial since June 2023.
arXiv Detail & Related papers (2024-07-09T07:11:10Z)
- Iterative or Innovative? A Problem-Oriented Perspective for Code Optimization [81.88668100203913]
Large language models (LLMs) have demonstrated strong capabilities in solving a wide range of programming tasks.
In this paper, we explore code optimization with a focus on performance enhancement, specifically aiming to optimize code for minimal execution time.
arXiv Detail & Related papers (2024-06-17T16:10:10Z)
- OptiMUS: Scalable Optimization Modeling with (MI)LP Solvers and Large Language Models [21.519880445683107]
This paper introduces OptiMUS, a Large Language Model (LLM)-based agent designed to formulate and solve (mixed integer) linear programming problems from their natural language descriptions.
OptiMUS can develop mathematical models, write and debug solver code, evaluate the generated solutions, and improve its model and code based on these evaluations.
Experiments demonstrate that OptiMUS outperforms existing state-of-the-art methods on easy datasets by more than 20% and on hard datasets by more than 30%.
arXiv Detail & Related papers (2024-02-15T18:19:18Z)
- Large Language Models as Optimizers [106.52386531624532]
We propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers.
In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values.
We demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks.
arXiv Detail & Related papers (2023-09-07T00:07:15Z)
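A hedged sketch of the OPRO loop described above: a meta-prompt carries previously generated (solution, score) pairs, and the model proposes the next candidate. The LLM call is stubbed with a random proposer purely so the sketch runs; in practice an actual model call goes there.

```python
# Illustrative OPRO-style loop: keep a history of (solution, score) pairs in
# a meta-prompt and ask a "model" for the next candidate. The LLM is stubbed
# with a random proposer so this sketch is runnable.
import random

def score(x):
    # Toy objective to maximize (peak at x = 3).
    return -(x - 3.0) ** 2

def llm_propose(meta_prompt):
    # Stand-in for an LLM call that reads the history and proposes a new x.
    return random.uniform(-10, 10)

history = []
for step in range(20):
    meta_prompt = "Previous solutions and scores:\n" + "\n".join(
        f"x={x:.3f}, score={s:.3f}" for x, s in sorted(history, key=lambda p: p[1])
    )
    x = llm_propose(meta_prompt)
    history.append((x, score(x)))

best_x, best_s = max(history, key=lambda p: p[1])
print(f"best x = {best_x:.3f}, score = {best_s:.3f}")
```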
- OMLT: Optimization & Machine Learning Toolkit [54.58348769621782]
The optimization and machine learning toolkit (OMLT) is an open-source software package incorporating neural network and gradient-boosted tree surrogate models.
We discuss the advances in optimization technology that made OMLT possible and show how OMLT seamlessly integrates with the algebraic modeling language Pyomo.
arXiv Detail & Related papers (2022-02-04T22:23:45Z)
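A hedged sketch of the OMLT + Pyomo integration pattern from the entry above: a trained Keras surrogate is embedded into a Pyomo model as algebraic constraints. Class and function names follow OMLT's documented usage as best recalled here and should be verified against the current OMLT release; keras_model is an assumed pre-trained network.

```python
# Hedged sketch: embedding a trained neural surrogate in a Pyomo model via
# OMLT. Names follow OMLT's documented usage pattern but should be checked
# against the current OMLT docs; `keras_model` is assumed to be given.
import pyomo.environ as pyo
from omlt import OmltBlock
from omlt.io import load_keras_sequential
from omlt.neuralnet import FullSpaceNNFormulation

# Translate the Keras network into OMLT's internal network definition,
# with bounds on the (scaled) input.
net = load_keras_sequential(keras_model, scaled_input_bounds={0: (0.0, 1.0)})

m = pyo.ConcreteModel()
m.nn = OmltBlock()
m.nn.build_formulation(FullSpaceNNFormulation(net))  # NN as constraints

# Optimize over the surrogate: minimize the network output w.r.t. its input.
m.obj = pyo.Objective(expr=m.nn.outputs[0], sense=pyo.minimize)
pyo.SolverFactory("ipopt").solve(m)
print(pyo.value(m.nn.inputs[0]), pyo.value(m.nn.outputs[0]))
```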