Tightening Discretization-based MILP Models for the Pooling Problem
using Upper Bounds on Bilinear Terms
- URL: http://arxiv.org/abs/2207.03699v1
- Date: Fri, 8 Jul 2022 05:28:59 GMT
- Title: Tightening Discretization-based MILP Models for the Pooling Problem
using Upper Bounds on Bilinear Terms
- Authors: Yifu Chen, Christos T. Maravelias, Xiaomin Zhang
- Abstract summary: Discretization-based methods have been proposed for solving nonconvex optimization problems with bilinear terms.
This paper presents tightening constraints for the discretization-based MILP models for the pooling problem.
- Score: 2.6253445491808307
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Discretization-based methods have been proposed for solving nonconvex
optimization problems with bilinear terms. These methods convert the original
nonconvex optimization problems into mixed-integer linear programs (MILPs).
While methods for converting nonconvex optimization problems into MILPs have
been studied extensively, research on tightening the resulting MILP models is
limited. In this paper, we present tightening constraints for the
discretization-based MILP models for the pooling problem. Specifically, we
study tightening constraints derived from upper bounds on the bilinear terms
and from the structures introduced by the discretization. We demonstrate the
effectiveness of our constraints through computational results for MILP models
derived from different formulations of (1) the pooling problem and (2)
discretization-based pooling models. These results show that our methods
reduce the solution time for the MILP models on CPLEX 12.10. Finally,
we note that while our methods are presented in the context of the pooling
problem, they can be extended to address other nonconvex optimization problems
with upper bounds on bilinear terms.
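To make the setup concrete, below is a minimal sketch of the kind of model the abstract describes: one factor of a bilinear term z = x*y is discretized via a binary expansion, the resulting products x*d_k are linearized, and a known upper bound U on z is used both directly and to strengthen the digit-activation bounds. This is an illustrative toy, not the paper's formulation: the bounds X, Y, U, the digit count n, and the PuLP/CBC toolchain are all assumptions made for the demo, whereas the paper derives pooling-specific constraints and tests them with CPLEX 12.10.

```python
# Illustrative sketch only -- NOT the paper's exact model. Pattern shown:
# binary-expansion discretization of y, linearization of x*d_k, and a
# tightening cut derived from an assumed upper bound U on z = x*y.
import pulp

X, Y = 4.0, 6.0   # assumed variable bounds: 0 <= x <= X, 0 <= y <= Y
U = 10.0          # assumed valid upper bound on the bilinear term z = x*y
n = 4             # number of binary digits used to discretize y
step = Y / (2**n - 1)

prob = pulp.LpProblem("discretized_bilinear", pulp.LpMaximize)
x = pulp.LpVariable("x", lowBound=0, upBound=X)
d = [pulp.LpVariable(f"d_{k}", cat=pulp.LpBinary) for k in range(n)]
w = [pulp.LpVariable(f"w_{k}", lowBound=0) for k in range(n)]  # w_k ~ x*d_k

y_expr = step * pulp.lpSum(2**k * d[k] for k in range(n))  # discretized y
z_expr = step * pulp.lpSum(2**k * w[k] for k in range(n))  # linearized x*y

for k in range(n):
    # Tightened activation bound: if digit k is on, then y >= step*2^k, so
    # x*y <= U forces x <= U/(step*2^k); a plain big-M would use X instead.
    prob += w[k] <= min(X, U / (step * 2**k)) * d[k]
    prob += w[k] <= x
    prob += w[k] >= x - X * (1 - d[k])

prob += z_expr <= U   # the upper bound on the bilinear term itself
prob += z_expr        # objective: maximize the (discretized) bilinear term

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print("x =", pulp.value(x), "y ~", pulp.value(y_expr), "z ~", pulp.value(z_expr))
```

The min(...) coefficient is the simplest kind of cut an upper bound on a bilinear term makes possible once the discretization structure is exploited; the paper's constraints are stronger and tailored to pooling formulations.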
Related papers
- Pushing the Limits of Large Language Model Quantization via the Linearity Theorem [71.3332971315821]
We present a "line theoremarity" establishing a direct relationship between the layer-wise $ell$ reconstruction error and the model perplexity increase due to quantization.
This insight enables two novel applications: (1) a simple data-free LLM quantization method using Hadamard rotations and MSE-optimal grids, dubbed HIGGS, and (2) an optimal solution to the problem of finding non-uniform per-layer quantization levels.
arXiv Detail & Related papers (2024-11-26T15:35:44Z) - Maximum a Posteriori Inference for Factor Graphs via Benders' Decomposition [0.38233569758620056]
We present a method for maximum a posteriori inference in general Bayesian factor models.
We derive MAP estimation algorithms for the Bayesian Gaussian mixture model and latent Dirichlet allocation.
arXiv Detail & Related papers (2024-10-24T19:57:56Z) - Total Uncertainty Quantification in Inverse PDE Solutions Obtained with Reduced-Order Deep Learning Surrogate Models [50.90868087591973]
We propose an approximate Bayesian method for quantifying the total uncertainty in inverse PDE solutions obtained with machine learning surrogate models.
We test the proposed framework by comparing it with the iterative ensemble smoother and deep ensembling methods for a non-linear diffusion equation.
arXiv Detail & Related papers (2024-08-20T19:06:02Z) - Optimized QUBO formulation methods for quantum computing [0.4999814847776097]
We show how to apply our techniques to an NP-hard optimization problem inspired by a real-world financial scenario.
We then submit instances of this problem to two D-Wave quantum annealers, comparing the performance of our novel approach with that of the standard methods used in these scenarios.
arXiv Detail & Related papers (2024-06-11T19:59:05Z) - Deriving Compact QUBO Models via Multilevel Constraint Transformation [0.8192907805418583]
We propose a novel Multilevel Constraint Transformation Scheme (MLCTS) that derives QUBO models with fewer ancillary binary variables.
As a proof of concept, we compare the performance of two QUBO models for the latter problem on both a general-purpose software-based solver and a hardware-based QUBO solver.
The MLCTS-derived models demonstrate significantly better performance for both solvers, in particular, solving up to seven times more instances with the hardware-based approach.
arXiv Detail & Related papers (2024-04-04T17:34:08Z) - Learning Constrained Optimization with Deep Augmented Lagrangian Methods [54.22290715244502]
A machine learning (ML) model is trained to emulate a constrained optimization solver.
This paper proposes an alternative approach, in which the ML model is trained to predict dual solution estimates directly.
This enables an end-to-end training scheme in which the dual objective serves as the loss function and solution estimates are driven toward primal feasibility, emulating a Dual Ascent method.
arXiv Detail & Related papers (2024-03-06T04:43:22Z) - Double Duality: Variational Primal-Dual Policy Optimization for
Constrained Reinforcement Learning [132.7040981721302]
We study the constrained convex Markov decision process (MDP), where the goal is to minimize a convex functional of the visitation measure.
Designing algorithms for a constrained convex MDP faces several challenges, including handling the large state space.
arXiv Detail & Related papers (2024-02-16T16:35:18Z) - Learning Graphical Factor Models with Riemannian Optimization [70.13748170371889]
This paper proposes a flexible algorithmic framework for graph learning under low-rank structural constraints.
The problem is expressed as penalized maximum likelihood estimation of an elliptical distribution.
We leverage geometries of positive definite matrices and positive semi-definite matrices of fixed rank that are well suited to elliptical models.
arXiv Detail & Related papers (2022-10-21T13:19:45Z) - Conditional gradient methods for stochastically constrained convex
minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
arXiv Detail & Related papers (2020-07-07T21:26:35Z) - Consistent Second-Order Conic Integer Programming for Learning Bayesian
Networks [2.7473982588529653]
We study the problem of learning the sparse DAG structure of a BN from continuous observational data.
The optimal solution to the resulting second-order conic integer program is known to have desirable statistical properties under certain conditions.
We propose a concrete early stopping criterion to terminate the branch-and-bound process in order to obtain a near-optimal solution.
arXiv Detail & Related papers (2020-05-29T00:13:15Z)