FFCG: Effective and Fast Family Column Generation for Solving Large-Scale Linear Program
- URL: http://arxiv.org/abs/2412.19066v1
- Date: Thu, 26 Dec 2024 05:35:48 GMT
- Title: FFCG: Effective and Fast Family Column Generation for Solving Large-Scale Linear Program
- Authors: Yi-Xiang Hu, Feng Wu, Shaoang Li, Yifang Zhao, Xiang-Yang Li
- Abstract summary: Column Generation (CG) is an effective and iterative algorithm for solving large-scale linear programs (LPs).
Previous machine-learning-based approaches for CG only add a constant quantity of columns per iteration due to the state-space explosion problem.
We propose Fast Family Column Generation (FFCG) -- a novel reinforcement-learning-based CG that selects a variable number of columns as needed in an iteration.
- Score: 34.55787800577891
- License:
- Abstract: Column Generation (CG) is an effective and iterative algorithm for solving large-scale linear programs (LPs). During each CG iteration, new columns are added to improve the solution of the LP. Typically, CG greedily selects the single column with the most negative reduced cost, and convergence can be sped up by adding several columns at once. However, adding all columns with negative reduced costs would introduce redundant columns that do not improve the objective value. Selecting the appropriate columns to add therefore remains an open problem, and previous machine-learning-based approaches for CG only add a constant number of columns per iteration due to the state-space explosion problem. To address this, we propose Fast Family Column Generation (FFCG) -- a novel reinforcement-learning-based CG that selects a variable number of columns as needed in each iteration. Specifically, we formulate the column selection problem in CG as an MDP and design a reward metric that balances convergence speed against the number of redundant columns. In our experiments, FFCG converges faster on common benchmarks, reducing the number of CG iterations by 77.1% for the Cutting Stock Problem (CSP) and 84.8% for the Vehicle Routing Problem with Time Windows (VRPTW), and reducing computing time by 71.4% for CSP and 84.0% for VRPTW on average, compared to several state-of-the-art baselines.
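To make the contrast in the abstract concrete, below is a minimal, hypothetical sketch (not the authors' implementation) of the two column-selection strategies it describes: the classic greedy rule that adds only the single most negative reduced-cost column, and a variable-size selection of the kind FFCG learns. The restricted-master solve and the CSP/VRPTW pricing problems are omitted, the dual prices and candidate columns are toy data, and the fixed threshold merely stands in for the RL policy and reward metric; all names and numbers here are illustrative assumptions.

```python
# Illustrative sketch only: contrasts the greedy single-column CG rule with a
# variable-size selection of the kind FFCG learns. Toy data; not the paper's code.
import numpy as np

def reduced_cost(col_coeffs, col_cost, duals):
    """Reduced cost of a candidate column: c_j - pi^T a_j."""
    return col_cost - duals @ col_coeffs

def greedy_single_column(candidates, duals, tol=1e-9):
    """Classic CG rule: return at most one column, the one with the most
    negative reduced cost (nothing if no column can improve the LP)."""
    scored = [(reduced_cost(a, c, duals), (a, c)) for a, c in candidates]
    rc, best = min(scored, key=lambda s: s[0])
    return [best] if rc < -tol else []

def variable_subset(candidates, duals, threshold=-1e-2):
    """Stand-in for FFCG's learned policy: keep every candidate whose reduced
    cost is negative enough, so the number of added columns varies per
    iteration. FFCG instead chooses this subset with an RL policy whose reward
    balances convergence speed against redundant columns."""
    return [(a, c) for a, c in candidates
            if reduced_cost(a, c, duals) < threshold]

# Toy usage: two candidate columns for a covering LP with dual prices `pi`.
pi = np.array([1.0, 0.5, 0.0])
candidates = [
    (np.array([1.0, 1.0, 0.0]), 1.2),  # reduced cost = 1.2 - 1.5 = -0.3
    (np.array([0.0, 1.0, 1.0]), 0.6),  # reduced cost = 0.6 - 0.5 =  0.1
]
print(greedy_single_column(candidates, pi))  # adds only the first column
print(variable_subset(candidates, pi))       # may add several when more qualify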
Related papers
- A Greedy Strategy for Graph Cut [95.2841574410968]
We propose a greedy strategy to solve the problem of Graph Cut, called GGC.
It starts from the state where each data sample is regarded as a cluster and dynamically merges two clusters at a time.
GGC has a nearly linear computational complexity with respect to the number of samples.
arXiv Detail & Related papers (2024-12-28T05:49:42Z)
- All You Need is an Improving Column: Enhancing Column Generation for Parallel Machine Scheduling via Transformers [0.0]
We present a neural network-enhanced column generation (CG) approach for a parallel machine scheduling problem.
By training the neural network offline and using it in inference mode to predict columns with negative reduced costs, we achieve significant computational time savings.
For large-sized instances, the proposed approach achieves an 80% improvement in the objective value in under 500 seconds.
arXiv Detail & Related papers (2024-10-21T02:53:37Z)
- Machine Learning-Enhanced Ant Colony Optimization for Column Generation [5.82410475933163]
Column generation is a powerful technique for solving optimization problems that involve a large number of variables or columns.
We propose a novel method called machine learning enhanced ant colony optimization (MLACO), to efficiently generate multiple high-quality columns from a subproblem.
arXiv Detail & Related papers (2024-04-23T01:00:09Z)
- A Reinforcement-Learning-Based Multiple-Column Selection Strategy for Column Generation [33.03891490706789]
Column generation is one of the most successful approaches for solving large-scale linear programming problems.
We propose a novel reinforcement-learning-based (RL) multiple-column selection strategy.
Our approach is evaluated on two sets of problems: the cutting stock problem and the graph coloring problem.
arXiv Detail & Related papers (2023-12-21T11:35:10Z)
- An Efficient Algorithm for Clustered Multi-Task Compressive Sensing [60.70532293880842]
Clustered multi-task compressive sensing is a hierarchical model that solves multiple compressive sensing tasks.
The existing inference algorithm for this model is computationally expensive and does not scale well in high dimensions.
We propose a new algorithm that substantially accelerates model inference by avoiding the need to explicitly compute these covariance matrices.
arXiv Detail & Related papers (2023-09-30T15:57:14Z)
- A Deep Reinforcement Learning Framework For Column Generation [13.767526395378638]
Column Generation (CG) is an iterative algorithm for solving linear programs with an extremely large number of variables (columns).
We propose RLCG, the first Reinforcement Learning (RL) approach for CG.
Unlike typical column selection rules which myopically select a column based on local information at each iteration, we treat CG as a sequential decision-making problem.
arXiv Detail & Related papers (2022-06-03T03:58:54Z)
- Enhancing Column Generation by a Machine-Learning-Based Pricing Heuristic for Graph Coloring [5.278929511653199]
Column Generation (CG) is an effective method for solving large-scale optimization problems.
We propose a Machine-Learning Pricing Heuristic that can generate many high-quality columns efficiently.
arXiv Detail & Related papers (2021-12-08T03:58:25Z)
- Parameter-free Locally Accelerated Conditional Gradients [91.19349793915615]
We introduce a novel Parameter-Free Locally accelerated CG (PF-LaCG) algorithm, for which we provide rigorous convergence guarantees.
Our theoretical results are complemented by numerical experiments, which demonstrate local acceleration and showcase the practical improvements of PF-LaCG over non-accelerated algorithms.
arXiv Detail & Related papers (2021-02-12T22:50:01Z)
- Non-Parametric Adaptive Network Pruning [125.4414216272874]
We introduce non-parametric modeling to simplify the algorithm design.
Inspired by the face recognition community, we use a message passing algorithm to obtain an adaptive number of exemplars.
EPruner breaks the dependency on the training data in determining the "important" filters.
arXiv Detail & Related papers (2021-01-20T06:18:38Z)
- Gradient Coding with Dynamic Clustering for Straggler Mitigation [57.9123881133818]
GC-DC (gradient coding with dynamic clustering) regulates the number of straggling workers in each cluster based on the straggler behavior in the previous iteration.
We numerically show that GC-DC provides significant improvements in the average completion time (of each iteration) with no increase in the communication load compared to the original GC scheme.
arXiv Detail & Related papers (2020-11-03T18:52:15Z)