Code Retrieval for MILP Instance Generation
- URL: http://arxiv.org/abs/2505.11526v1
- Date: Sun, 11 May 2025 10:41:44 GMT
- Title: Code Retrieval for MILP Instance Generation
- Authors: Tianxing Yang, Huigen Ye, Hua Xu
- Abstract summary: Mixed-Integer Linear Programming (MILP) is widely used in fields such as scheduling, logistics, and planning. Existing methods for MILP instance generation typically necessitate training a separate model for each problem class. We reformulate the MILP Instance Generation task as a MILP Code Generation task, enabling efficient, flexible, and interpretable instance generation through code.
- Score: 9.289277282074776
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Mixed-Integer Linear Programming (MILP) is widely used in fields such as scheduling, logistics, and planning. Enhancing the performance of MILP solvers, particularly learning-based solvers, requires substantial amounts of high-quality data. However, existing methods for MILP instance generation typically necessitate training a separate model for each problem class and are computationally intensive when generating new instances. To address these limitations, we reformulate the MILP Instance Generation task as a MILP Code Generation task, enabling efficient, flexible, and interpretable instance generation through code. Since MILP instances generated from code can vary significantly in scale, we introduce MILP-EmbedSim, a new similarity metric that accurately measures the similarity between instances of varying sizes within the same problem class. Leveraging this metric, we propose MILP-Retrieval, a pipeline that retrieves generation code from a library to produce MILP instances highly similar to a target instance. MILP-Retrieval outperforms baselines in both the MILP Code Generation and Instance Generation tasks, provides a novel perspective on MILP instance generation, and opens new possibilities for learning-based solvers.
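To make the retrieve-then-generate idea above concrete, the following Python sketch embeds a target instance, scores it against a library of generator scripts, and runs the code of the closest match. The embedding function, the use of cosine similarity as a stand-in for MILP-EmbedSim, the library layout, and the command-line flag are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a retrieve-then-generate pipeline in the spirit of
# MILP-Retrieval. embed_instance(), the library layout, and the use of cosine
# similarity as a stand-in for MILP-EmbedSim are illustrative assumptions.
import json
import subprocess
from pathlib import Path

import numpy as np


def embed_instance(mps_path: str) -> np.ndarray:
    """Placeholder: map a MILP instance file (.mps/.lp) to a fixed-size vector.

    The actual MILP-EmbedSim embedding is learned; only the interface is fixed here.
    """
    raise NotImplementedError("plug in a learned instance embedding here")


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))


def retrieve_generation_code(target_mps: str, library_dir: str) -> Path:
    """Return the generator script whose cached embedding is most similar
    to the target instance's embedding."""
    target_vec = embed_instance(target_mps)
    best_score, best_script = -1.0, None
    for entry in Path(library_dir).glob("*/meta.json"):
        # Assumed per-entry metadata: {"embedding": [...], "script": "gen.py"}
        meta = json.loads(entry.read_text())
        score = cosine_similarity(target_vec, np.asarray(meta["embedding"]))
        if score > best_score:
            best_score, best_script = score, entry.parent / meta["script"]
    return best_script


if __name__ == "__main__":
    script = retrieve_generation_code("target_instance.mps", "generator_library")
    # Run the retrieved generator to produce new, similar MILP instances
    # (the --num-instances flag is a hypothetical interface).
    subprocess.run(["python", str(script), "--num-instances", "10"], check=True)
```

The point this sketch illustrates is the efficiency claim in the abstract: the embedding and comparison happen once per target instance, after which arbitrarily many new instances can be produced simply by re-running the retrieved script.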
Related papers
- MergeBench: A Benchmark for Merging Domain-Specialized LLMs [19.49737955489798]
We introduce MergeBench, a comprehensive evaluation suite designed to assess model merging at scale. MergeBench builds on state-of-the-art open-source language models, including Llama and Gemma families at 2B to 9B scales. We assess eight representative merging methods across multi-task performance, forgetting, and runtime efficiency.
arXiv Detail & Related papers (2025-05-16T04:02:55Z)
- Leveraging Metamemory Mechanisms for Enhanced Data-Free Code Generation in LLMs [44.80420740455364]
M2WF is a framework for improving large language models' one-time code generation. Unlike prior methods, it minimizes dependency on curated data and adapts to various coding scenarios. The code and framework will be publicly available on GitHub and HuggingFace.
arXiv Detail & Related papers (2025-01-14T07:16:43Z)
- MILP-StuDio: MILP Instance Generation via Block Structure Decomposition [55.79888361191114]
Mixed-integer linear programming (MILP) is one of the most popular mathematical formulations with numerous applications.
We propose a novel MILP generation framework, called Block Structure Decomposition (MILP-StuDio), to generate high-quality instances by preserving the block structures.
arXiv Detail & Related papers (2024-10-30T08:33:27Z)
- Towards Foundation Models for Mixed Integer Linear Programming [15.064109397239086]
Current deep learning approaches for MILP focus on specific problem classes and do not generalize to unseen classes. We introduce MILP-Evolve, a novel evolutionary framework that is capable of generating a large set of diverse MILP classes. Our empirical results show that models trained on the data generated by MILP-Evolve achieve significant improvements on unseen problems.
arXiv Detail & Related papers (2024-10-10T18:20:44Z)
- POGEMA: A Benchmark Platform for Cooperative Multi-Agent Pathfinding [76.67608003501479]
We introduce POGEMA, a comprehensive set of tools that includes a fast environment for learning, a problem instance generator, and a visualization toolkit. We also introduce and define an evaluation protocol that specifies a range of domain-related metrics, computed based on primary evaluation indicators. The results of this comparison, which involves a variety of state-of-the-art MARL, search-based, and hybrid methods, are presented.
arXiv Detail & Related papers (2024-07-20T16:37:21Z)
- DIG-MILP: a Deep Instance Generator for Mixed-Integer Linear Programming with Feasibility Guarantee [47.11455377400096]
Mixed-integer linear programming (MILP) stands as a notable NP-hard problem pivotal to numerous crucial industrial applications.
We present DIG-MILP, a deep generative framework based on variational auto-encoder (VAE), adept at extracting deep-level structural features from highly limited MILP data.
arXiv Detail & Related papers (2023-10-20T03:45:29Z)
- A Deep Instance Generative Framework for MILP Solvers Under Limited Data Availability [66.37474135424637]
We propose G2MILP, the first deep generative framework for MILP instances.
G2MILP represents MILP instances as bipartite graphs and applies a masked variational autoencoder to iteratively corrupt and replace parts of the original graphs to generate new ones (a minimal sketch of this bipartite encoding appears after this list).
We design a suite of benchmarks to evaluate the quality of the generated MILP instances.
arXiv Detail & Related papers (2023-10-04T13:34:34Z)
- Few-Shot Class-Incremental Learning by Sampling Multi-Phase Tasks [59.12108527904171]
A model should recognize new classes and maintain discriminability over old classes.
The task of recognizing few-shot new classes without forgetting old classes is called few-shot class-incremental learning (FSCIL).
We propose a new paradigm for FSCIL based on meta-learning by LearnIng Multi-phase Incremental Tasks (LIMIT).
arXiv Detail & Related papers (2022-03-31T13:46:41Z)
- Memory-Based Optimization Methods for Model-Agnostic Meta-Learning and Personalized Federated Learning [56.17603785248675]
Model-agnostic meta-learning (MAML) has become a popular research area.
Existing MAML algorithms rely on the 'episode' idea by sampling a few tasks and data points to update the meta-model at each iteration.
This paper proposes memory-based algorithms for MAML that converge with vanishing error.
arXiv Detail & Related papers (2021-06-09T08:47:58Z)
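Several of the generative approaches listed above (G2MILP, DIG-MILP) encode a MILP instance as a bipartite constraint-variable graph before applying a variational autoencoder. Below is a minimal sketch of that encoding, assuming the instance is given as a SciPy sparse constraint matrix; the node and edge attributes and the networkx representation are illustrative choices, not the papers' exact feature sets.

```python
# Minimal sketch of the bipartite constraint-variable encoding used by
# graph-based MILP generators such as G2MILP/DIG-MILP. The exact node/edge
# features in those papers differ; the attributes below are assumptions.
import networkx as nx
import numpy as np
from scipy import sparse


def milp_to_bipartite(A: sparse.csr_matrix, b: np.ndarray, c: np.ndarray,
                      is_integer: np.ndarray) -> nx.Graph:
    """Encode min c^T x s.t. A x <= b as a bipartite graph with one node per
    constraint (row), one node per variable (column), and an edge for every
    nonzero coefficient of A."""
    m, n = A.shape
    g = nx.Graph()
    for i in range(m):
        g.add_node(("con", i), rhs=float(b[i]))
    for j in range(n):
        g.add_node(("var", j), obj=float(c[j]), integer=bool(is_integer[j]))
    rows, cols = A.nonzero()
    for i, j in zip(rows, cols):
        g.add_edge(("con", int(i)), ("var", int(j)), coef=float(A[i, j]))
    return g


if __name__ == "__main__":
    # Tiny example: 2 constraints, 3 variables.
    A = sparse.csr_matrix(np.array([[1.0, 2.0, 0.0], [0.0, 1.0, 3.0]]))
    g = milp_to_bipartite(A, b=np.array([4.0, 6.0]),
                          c=np.array([1.0, 0.0, 2.0]),
                          is_integer=np.array([True, False, True]))
    print(g.number_of_nodes(), g.number_of_edges())  # 5 nodes, 4 edges
```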
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.