OptiMUS: Scalable Optimization Modeling with (MI)LP Solvers and Large Language Models
- URL: http://arxiv.org/abs/2402.10172v1
- Date: Thu, 15 Feb 2024 18:19:18 GMT
- Title: OptiMUS: Scalable Optimization Modeling with (MI)LP Solvers and Large Language Models
- Authors: Ali AhmadiTeshnizi, Wenzhi Gao, Madeleine Udell
- Abstract summary: This paper introduces OptiMUS, a Large Language Model (LLM)-based agent designed to formulate and solve (mixed integer) linear programming problems from their natural language descriptions.
OptiMUS can develop mathematical models, write and debug solver code, evaluate the generated solutions, and improve its model and code based on these evaluations.
Experiments demonstrate that OptiMUS outperforms existing state-of-the-art methods on easy datasets by more than $20\%$ and on hard datasets by more than $30\%$.
- Score: 21.519880445683107
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Optimization problems are pervasive in sectors from manufacturing and
distribution to healthcare. However, most such problems are still solved
heuristically by hand rather than optimally by state-of-the-art solvers because
the expertise required to formulate and solve these problems limits the
widespread adoption of optimization tools and techniques. This paper introduces
OptiMUS, a Large Language Model (LLM)-based agent designed to formulate and
solve (mixed integer) linear programming problems from their natural language
descriptions. OptiMUS can develop mathematical models, write and debug solver
code, evaluate the generated solutions, and improve its model and code based on
these evaluations. OptiMUS utilizes a modular structure to process problems,
allowing it to handle problems with long descriptions and complex data without
long prompts. Experiments demonstrate that OptiMUS outperforms existing
state-of-the-art methods on easy datasets by more than $20\%$ and on hard
datasets (including a new dataset, NLP4LP, released with this paper that
features long and complex problems) by more than $30\%$.
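As a rough sketch of the formulate-code-evaluate-repair loop the abstract describes (illustrative only, not the OptiMUS implementation; `call_llm`, the prompts, and `solve_from_description` are hypothetical placeholders):

```python
import subprocess
import sys
import tempfile

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for an LLM API call; not part of OptiMUS."""
    raise NotImplementedError

def solve_from_description(description: str, max_repairs: int = 3) -> str:
    # Formulate: ask the LLM for a mathematical model of the problem.
    model = call_llm(f"Write an LP/MILP formulation for:\n{description}")
    # Code: ask the LLM for solver code implementing that model.
    code = call_llm(f"Write Python solver code for this model:\n{model}")
    for _ in range(max_repairs):
        # Evaluate: run the generated code and capture its output.
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        result = subprocess.run([sys.executable, path],
                                capture_output=True, text=True, timeout=60)
        if result.returncode == 0:
            return result.stdout  # solver ran cleanly; return its report
        # Debug: feed the traceback back to the LLM and ask for a fix.
        code = call_llm(f"This code failed:\n{result.stderr}\nFix it:\n{code}")
    raise RuntimeError("could not produce runnable solver code")
```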
Related papers
- Benchmarking LLMs for Optimization Modeling and Enhancing Reasoning via Reverse Socratic Synthesis [60.23133327001978]
Large language models (LLMs) have exhibited their problem-solving ability in mathematical reasoning.
We propose E-OPT, a benchmark for end-to-end optimization problem-solving with human-readable inputs and outputs.
arXiv Detail & Related papers (2024-07-13T13:27:57Z)
- Solving General Natural-Language-Description Optimization Problems with Large Language Models [34.50671063271608]
We propose a novel framework called OptLLM that augments LLMs with external solvers.
OptLLM accepts user queries in natural language, converts them into mathematical formulations and programming code, and calls the solvers to calculate the results.
Some features of OptLLM framework have been available for trial since June 2023.
arXiv Detail & Related papers (2024-07-09T07:11:10Z)
- Iterative or Innovative? A Problem-Oriented Perspective for Code Optimization [81.88668100203913]
Large language models (LLMs) have demonstrated strong capabilities in solving a wide range of programming tasks.
In this paper, we explore code optimization with a focus on performance enhancement, specifically aiming to optimize code for minimal execution time.
arXiv Detail & Related papers (2024-06-17T16:10:10Z)
- Functional Graphical Models: Structure Enables Offline Data-Driven Optimization [121.57202302457135]
We show how structure can enable sample-efficient data-driven optimization.
We also present a data-driven optimization algorithm that infers the FGM structure itself.
arXiv Detail & Related papers (2024-01-08T22:33:14Z)
- OptiMUS: Optimization Modeling Using MIP Solvers and large language models [21.519880445683107]
We introduce OptiMUS, a Large Language Model (LLM)-based agent designed to formulate and solve MILP problems from their natural language descriptions.
To benchmark our agent, we present NLP4LP, a novel dataset of linear programming (LP) and mixed integer linear programming (MILP) problems.
Our experiments demonstrate that OptiMUS solves nearly twice as many problems as a basic LLM prompting strategy.
arXiv Detail & Related papers (2023-10-09T19:47:03Z)
- Large Language Models as Optimizers [106.52386531624532]
We propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers.
In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values.
We demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks.
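As a hedged sketch of this loop (illustrative, not the authors' code; `call_llm` and `score` stand in for the LLM call and the task-specific evaluator):

```python
def call_llm(meta_prompt: str) -> str:
    """Hypothetical placeholder for an LLM API call."""
    raise NotImplementedError

def score(solution: str) -> float:
    """Hypothetical evaluator, e.g., a candidate prompt's task accuracy."""
    raise NotImplementedError

def opro(seed_solutions, steps=20, keep=10):
    # Maintain a history of (solution, value) pairs, as the summary describes.
    history = [(s, score(s)) for s in seed_solutions]
    for _ in range(steps):
        best = sorted(history, key=lambda p: p[1])[-keep:]  # top `keep` pairs
        meta_prompt = (
            "Previous solutions and their scores:\n"
            + "\n".join(f"{s} -> {v:.3f}" for s, v in best)
            + "\nPropose a new solution with a higher score."
        )
        candidate = call_llm(meta_prompt)  # LLM proposes from past solutions
        history.append((candidate, score(candidate)))
    return max(history, key=lambda p: p[1])  # best (solution, value) found
```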
arXiv Detail & Related papers (2023-09-07T00:07:15Z)
- Diagnosing Infeasible Optimization Problems Using Large Language Models [9.101849365688905]
We introduce OptiChat, a first-of-its-kind natural language-based system equipped with a GUI for engaging in interactive conversations about infeasible optimization models.
OptiChat can provide natural language descriptions of the optimization model itself, identify potential sources of infeasibility, and offer suggestions to make the model feasible.
We utilize few-shot learning, expert chain-of-thought, key-retrieve, and sentiment prompts to enhance OptiChat's reliability.
arXiv Detail & Related papers (2023-08-23T04:34:05Z)
- A Framework for Inherently Interpretable Optimization Models [0.0]
Solving large-scale problems that seemed intractable decades ago is now a routine task.
One major barrier is that the optimization software can be perceived as a black box.
We propose an optimization framework to derive solutions that inherently come with an easily comprehensible explanatory rule.
arXiv Detail & Related papers (2022-08-26T10:32:00Z)
- Offline Model-Based Optimization via Normalized Maximum Likelihood Estimation [101.22379613810881]
We consider data-driven optimization problems where one must maximize a function given only queries at a fixed set of points.
This problem setting emerges in many domains where function evaluation is a complex and expensive process.
We propose a tractable approximation that allows us to scale our method to high-capacity neural network models.
arXiv Detail & Related papers (2021-02-16T06:04:27Z)
- A Knowledge Representation Approach to Automated Mathematical Modelling [1.8907108368038215]
We propose a new mixed-integer linear programming (MILP) model ontology and a novel constraint typology of MILP formulations.
MILP is a commonly used mathematical programming technique for modelling and solving real-life scheduling, routing, planning, resource allocation, and timetabling optimization problems.
Our aim is to develop a machine-readable knowledge representation for MILP that allows us to map an end-user's natural language description of the business optimization problem to an MILP formal specification.
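To make the target of such a mapping concrete, here is a toy example (not from the paper) pairing a one-constraint business description with a machine-readable MILP formulation, written with the open-source PuLP library:

```python
from pulp import LpMaximize, LpProblem, LpVariable, value

# Toy description: "A workshop makes chairs ($30 profit, 2 labor hours each)
# and tables ($50 profit, 4 hours each) with 40 labor hours available.
# How many of each maximize profit?"
prob = LpProblem("furniture_planning", LpMaximize)
chairs = LpVariable("chairs", lowBound=0, cat="Integer")
tables = LpVariable("tables", lowBound=0, cat="Integer")
prob += 30 * chairs + 50 * tables      # objective: total profit
prob += 2 * chairs + 4 * tables <= 40  # constraint: labor-hour budget
prob.solve()                           # bundled CBC solver
print(value(chairs), value(tables), value(prob.objective))
```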
arXiv Detail & Related papers (2020-11-12T10:29:57Z)
- Model Inversion Networks for Model-Based Optimization [110.24531801773392]
We propose model inversion networks (MINs), which learn an inverse mapping from scores to inputs.
MINs can scale to high-dimensional input spaces and leverage offline logged data for both contextual and non-contextual optimization problems.
We evaluate MINs on tasks from the Bayesian optimization literature, high-dimensional model-based optimization problems over images and protein designs, and contextual bandit optimization from logged data.
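A minimal sketch of the inverse-mapping idea (a simplified regression stand-in, not the authors' training setup, which uses richer conditioning and objectives):

```python
import torch
import torch.nn as nn

# Learn a map from a scalar score y back to an input x on offline data,
# then query it at a score above anything observed to propose a new input.
inverse_map = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 8))

# Toy offline dataset: 8-dim inputs with a made-up scalar score.
xs = torch.randn(256, 8)
ys = xs.sum(dim=1, keepdim=True)

opt = torch.optim.Adam(inverse_map.parameters(), lr=1e-3)
for _ in range(500):
    loss = ((inverse_map(ys) - xs) ** 2).mean()  # fit x = f(y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Propose an input for a score higher than any in the logged data.
proposal = inverse_map((ys.max() + 1.0).reshape(1, 1))
```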
arXiv Detail & Related papers (2019-12-31T18:06:49Z)