OMLT: Optimization & Machine Learning Toolkit
- URL: http://arxiv.org/abs/2202.02414v1
- Date: Fri, 4 Feb 2022 22:23:45 GMT
- Title: OMLT: Optimization & Machine Learning Toolkit
- Authors: Francesco Ceccon, Jordan Jalving, Joshua Haddad, Alexander Thebelt,
Calvin Tsay, Carl D. Laird, Ruth Misener
- Abstract summary: The optimization and machine learning toolkit (OMLT) is an open-source software package incorporating neural network and gradient-boosted tree surrogate models.
We discuss the advances in optimization technology that made OMLT possible and show how OMLT seamlessly integrates with the algebraic modeling language Pyomo.
- Score: 54.58348769621782
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The optimization and machine learning toolkit (OMLT) is an open-source
software package incorporating neural network and gradient-boosted tree
surrogate models, which have been trained using machine learning, into larger
optimization problems. We discuss the advances in optimization technology that
made OMLT possible and show how OMLT seamlessly integrates with the algebraic
modeling language Pyomo. We demonstrate how to use OMLT for solving
decision-making problems in both computer science and engineering.
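Since the abstract emphasizes that OMLT embeds trained surrogates directly into Pyomo models, a short sketch of that workflow may help. The snippet below is a minimal, hedged example assuming OMLT's OmltBlock and FullSpaceNNFormulation API together with the omlt.io ONNX loader; the file name "surrogate.onnx", the input bounds, and the solver choice are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not from the paper): embedding a trained neural-network
# surrogate into a Pyomo optimization model via OMLT.
import onnx
import pyomo.environ as pyo
from omlt import OmltBlock
from omlt.neuralnet import FullSpaceNNFormulation
from omlt.io.onnx import load_onnx_neural_network

# Load a previously trained network exported to ONNX (hypothetical file name).
# The input_bounds format is assumed here; consult the OMLT docs for specifics.
onnx_model = onnx.load("surrogate.onnx")
net = load_onnx_neural_network(onnx_model, input_bounds=[(0.0, 1.0)])

# Build the optimization model and attach the surrogate as an OMLT block.
m = pyo.ConcreteModel()
m.surrogate = OmltBlock()
m.surrogate.build_formulation(FullSpaceNNFormulation(net))

# Optimize over the surrogate: minimize its output with respect to its input.
# Ipopt is appropriate only if the network uses smooth activations; ReLU
# networks would instead use a MILP formulation and a MIP solver.
m.obj = pyo.Objective(expr=m.surrogate.outputs[0], sense=pyo.minimize)
pyo.SolverFactory("ipopt").solve(m)
print(pyo.value(m.surrogate.inputs[0]), pyo.value(m.surrogate.outputs[0]))
```

Gradient-boosted tree surrogates, the other model class mentioned in the abstract, follow the same pattern: construct the corresponding OMLT formulation for the tree ensemble and attach it to an OmltBlock with build_formulation.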
Related papers
- Towards a Domain-Specific Modelling Environment for Reinforcement Learning [0.13124513975412253]
We use model-driven engineering (MDE) methods and tools for developing a domain-specific modelling environment.
We targeted reinforcement learning from the machine learning domain and evaluated the proposed language, the reinforcement learning modelling language (RLML).
The tool supports syntax-directed editing, constraint checking, and automatic generation of code from RLML models.
arXiv Detail & Related papers (2024-10-12T04:56:01Z) - OptiBench Meets ReSocratic: Measure and Improve LLMs for Optimization Modeling [62.19438812624467]
Large language models (LLMs) have exhibited their problem-solving abilities in mathematical reasoning.
We propose OptiBench, a benchmark for End-to-end optimization problem-solving with human-readable inputs and outputs.
arXiv Detail & Related papers (2024-07-13T13:27:57Z) - Machine Learning Augmented Branch and Bound for Mixed Integer Linear
Programming [11.293025183996832]
Mixed Integer Linear Programming (MILP) offers a powerful modeling language for a wide range of applications.
In recent years, there has been an explosive development in the use of machine learning algorithms for enhancing all main tasks involved in the branch-and-bound algorithm.
In particular, we give detailed attention to machine learning algorithms that automatically optimize some metric of branch-and-bound efficiency.
arXiv Detail & Related papers (2024-02-08T09:19:26Z) - LeTO: Learning Constrained Visuomotor Policy with Differentiable Trajectory Optimization [1.1602089225841634]
This paper introduces LeTO, a method for learning constrained visuomotor policy with differentiable trajectory optimization.
We quantitatively evaluate LeTO in simulation and on a real robot.
arXiv Detail & Related papers (2024-01-30T23:18:35Z) - Machine Learning Insides OptVerse AI Solver: Design Principles and
Applications [74.67495900436728]
We present a comprehensive study on the integration of machine learning (ML) techniques into Huawei Cloud's OptVerse AI solver.
We showcase our methods for generating complex SAT and MILP instances, utilizing generative models that mirror the multifaceted structures of real-world problems.
We detail the incorporation of state-of-the-art parameter tuning algorithms which markedly elevate solver performance.
arXiv Detail & Related papers (2024-01-11T15:02:15Z) - VeLO: Training Versatile Learned Optimizers by Scaling Up [67.90237498659397]
We leverage the same scaling approach behind the success of deep learning to learn versatile optimizers.
We train an optimizer for deep learning which is itself a small neural network that ingests gradients and outputs parameter updates.
We open source our learned optimizers, meta-training code, the associated train and test data, and an extensive benchmark suite with baselines at velo-code.io.
arXiv Detail & Related papers (2022-11-17T18:39:07Z) - Large Scale Mask Optimization Via Convolutional Fourier Neural Operator
and Litho-Guided Self Training [54.16367467777526]
We present a Convolutional Fourier Neural Operator (CFNO) that can efficiently learn mask optimization tasks.
For the first time, our machine learning-based framework outperforms state-of-the-art numerical mask optimizers.
arXiv Detail & Related papers (2022-07-08T16:39:31Z) - Learning to Optimize: A Primer and A Benchmark [94.29436694770953]
Learning to optimize (L2O) is an emerging approach that leverages machine learning to develop optimization methods.
This article is poised to be the first comprehensive survey and benchmark of L2O for continuous optimization.
arXiv Detail & Related papers (2021-03-23T20:46:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.